Test Report: KVM_Linux_crio 19356

904aab08df45b60a074395618a72550fbda0cd8b:2024-07-31:35586

Test fail (13/278)

TestAddons/parallel/Ingress (159.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-469211 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-469211 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-469211 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7a453bb4-7b63-4a8b-b605-225347030b7b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7a453bb4-7b63-4a8b-b605-225347030b7b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.006466483s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-469211 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.650345771s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-469211 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.187
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable ingress-dns --alsologtostderr -v=1: (1.661242746s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable ingress --alsologtostderr -v=1: (7.697679417s)
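The failing step is the curl run inside the VM over ssh (addons_test.go:264): the remote process exits with status 28, which is curl's "operation timed out" code, so the request to http://127.0.0.1/ with the Host: nginx.example.com header never completed. Below is a minimal, hypothetical sketch (not part of the test suite) of re-running that probe by hand; the profile name addons-469211 and the binary path out/minikube-linux-amd64 are taken from the log above and are assumptions about a still-running cluster.

// repro.go: re-run the in-VM curl probe from addons_test.go:264 (illustrative only).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile name and binary path come from this report; adjust for your environment.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-469211",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// A remote exit status of 28 reproduces the timeout reported in stderr above.
		fmt.Println("probe failed:", err)
	}
}
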
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-469211 -n addons-469211
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 logs -n 25: (1.247614856s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-445232                                                                     | download-only-445232 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| delete  | -p download-only-127403                                                                     | download-only-127403 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-995532 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | binary-mirror-995532                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39497                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-995532                                                                     | binary-mirror-995532 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| addons  | enable dashboard -p                                                                         | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-469211 --wait=true                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | -p addons-469211                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | -p addons-469211                                                                            |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-469211 ssh cat                                                                       | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | /opt/local-path-provisioner/pvc-81750708-88a8-4465-b0b3-553afcc3b33e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-469211 ip                                                                            | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-469211 ssh curl -s                                                                   | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-469211 addons                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-469211 addons                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:22 UTC | 31 Jul 24 18:22 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-469211 ip                                                                            | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:23 UTC |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:16:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:16:57.860730  403525 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:16:57.860851  403525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:57.860861  403525 out.go:304] Setting ErrFile to fd 2...
	I0731 18:16:57.860865  403525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:57.861074  403525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:16:57.861736  403525 out.go:298] Setting JSON to false
	I0731 18:16:57.862670  403525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7161,"bootTime":1722442657,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:16:57.862737  403525 start.go:139] virtualization: kvm guest
	I0731 18:16:57.865057  403525 out.go:177] * [addons-469211] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:16:57.866513  403525 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:16:57.866561  403525 notify.go:220] Checking for updates...
	I0731 18:16:57.869388  403525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:16:57.870865  403525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:16:57.872231  403525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:57.873639  403525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:16:57.875087  403525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:16:57.876630  403525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:16:57.908294  403525 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:16:57.909524  403525 start.go:297] selected driver: kvm2
	I0731 18:16:57.909537  403525 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:16:57.909549  403525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:16:57.910288  403525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:57.910369  403525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:16:57.926051  403525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:16:57.926113  403525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:16:57.926352  403525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:16:57.926382  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:16:57.926391  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:16:57.926405  403525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:16:57.926482  403525 start.go:340] cluster config:
	{Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:16:57.926578  403525 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:57.929283  403525 out.go:177] * Starting "addons-469211" primary control-plane node in "addons-469211" cluster
	I0731 18:16:57.930889  403525 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:16:57.930935  403525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:16:57.930943  403525 cache.go:56] Caching tarball of preloaded images
	I0731 18:16:57.931033  403525 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:16:57.931043  403525 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:16:57.931398  403525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json ...
	I0731 18:16:57.931423  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json: {Name:mkde003688e571a7e4f73417fb328fe2240f62d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:16:57.931561  403525 start.go:360] acquireMachinesLock for addons-469211: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:16:57.931603  403525 start.go:364] duration metric: took 29.403µs to acquireMachinesLock for "addons-469211"
	I0731 18:16:57.931621  403525 start.go:93] Provisioning new machine with config: &{Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:16:57.931690  403525 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:16:57.933591  403525 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 18:16:57.933739  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:16:57.933788  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:16:57.948702  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0731 18:16:57.949234  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:16:57.949863  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:16:57.949886  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:16:57.950307  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:16:57.950492  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:16:57.950636  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:16:57.950794  403525 start.go:159] libmachine.API.Create for "addons-469211" (driver="kvm2")
	I0731 18:16:57.950824  403525 client.go:168] LocalClient.Create starting
	I0731 18:16:57.950889  403525 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:16:58.183978  403525 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:16:58.530021  403525 main.go:141] libmachine: Running pre-create checks...
	I0731 18:16:58.530048  403525 main.go:141] libmachine: (addons-469211) Calling .PreCreateCheck
	I0731 18:16:58.530577  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:16:58.531091  403525 main.go:141] libmachine: Creating machine...
	I0731 18:16:58.531107  403525 main.go:141] libmachine: (addons-469211) Calling .Create
	I0731 18:16:58.531237  403525 main.go:141] libmachine: (addons-469211) Creating KVM machine...
	I0731 18:16:58.532511  403525 main.go:141] libmachine: (addons-469211) DBG | found existing default KVM network
	I0731 18:16:58.533337  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.533166  403547 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0731 18:16:58.533354  403525 main.go:141] libmachine: (addons-469211) DBG | created network xml: 
	I0731 18:16:58.533364  403525 main.go:141] libmachine: (addons-469211) DBG | <network>
	I0731 18:16:58.533369  403525 main.go:141] libmachine: (addons-469211) DBG |   <name>mk-addons-469211</name>
	I0731 18:16:58.533382  403525 main.go:141] libmachine: (addons-469211) DBG |   <dns enable='no'/>
	I0731 18:16:58.533389  403525 main.go:141] libmachine: (addons-469211) DBG |   
	I0731 18:16:58.533399  403525 main.go:141] libmachine: (addons-469211) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:16:58.533409  403525 main.go:141] libmachine: (addons-469211) DBG |     <dhcp>
	I0731 18:16:58.533420  403525 main.go:141] libmachine: (addons-469211) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:16:58.533431  403525 main.go:141] libmachine: (addons-469211) DBG |     </dhcp>
	I0731 18:16:58.533440  403525 main.go:141] libmachine: (addons-469211) DBG |   </ip>
	I0731 18:16:58.533454  403525 main.go:141] libmachine: (addons-469211) DBG |   
	I0731 18:16:58.533460  403525 main.go:141] libmachine: (addons-469211) DBG | </network>
	I0731 18:16:58.533464  403525 main.go:141] libmachine: (addons-469211) DBG | 
	I0731 18:16:58.539069  403525 main.go:141] libmachine: (addons-469211) DBG | trying to create private KVM network mk-addons-469211 192.168.39.0/24...
	I0731 18:16:58.603615  403525 main.go:141] libmachine: (addons-469211) DBG | private KVM network mk-addons-469211 192.168.39.0/24 created
	I0731 18:16:58.603648  403525 main.go:141] libmachine: (addons-469211) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 ...
	I0731 18:16:58.603690  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.603623  403547 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:58.603711  403525 main.go:141] libmachine: (addons-469211) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:16:58.603769  403525 main.go:141] libmachine: (addons-469211) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:16:58.882586  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.882358  403547 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa...
	I0731 18:16:59.075233  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:59.075092  403547 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/addons-469211.rawdisk...
	I0731 18:16:59.075264  403525 main.go:141] libmachine: (addons-469211) DBG | Writing magic tar header
	I0731 18:16:59.075281  403525 main.go:141] libmachine: (addons-469211) DBG | Writing SSH key tar header
	I0731 18:16:59.075298  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:59.075211  403547 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 ...
	I0731 18:16:59.075313  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211
	I0731 18:16:59.075352  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:16:59.075370  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 (perms=drwx------)
	I0731 18:16:59.075382  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:59.075439  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:16:59.075471  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:16:59.075482  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:16:59.075497  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:16:59.075521  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:16:59.075535  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:16:59.075545  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:16:59.075554  403525 main.go:141] libmachine: (addons-469211) Creating domain...
	I0731 18:16:59.075564  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:16:59.075576  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home
	I0731 18:16:59.075588  403525 main.go:141] libmachine: (addons-469211) DBG | Skipping /home - not owner
	I0731 18:16:59.076534  403525 main.go:141] libmachine: (addons-469211) define libvirt domain using xml: 
	I0731 18:16:59.076563  403525 main.go:141] libmachine: (addons-469211) <domain type='kvm'>
	I0731 18:16:59.076569  403525 main.go:141] libmachine: (addons-469211)   <name>addons-469211</name>
	I0731 18:16:59.076578  403525 main.go:141] libmachine: (addons-469211)   <memory unit='MiB'>4000</memory>
	I0731 18:16:59.076585  403525 main.go:141] libmachine: (addons-469211)   <vcpu>2</vcpu>
	I0731 18:16:59.076592  403525 main.go:141] libmachine: (addons-469211)   <features>
	I0731 18:16:59.076597  403525 main.go:141] libmachine: (addons-469211)     <acpi/>
	I0731 18:16:59.076602  403525 main.go:141] libmachine: (addons-469211)     <apic/>
	I0731 18:16:59.076607  403525 main.go:141] libmachine: (addons-469211)     <pae/>
	I0731 18:16:59.076613  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.076618  403525 main.go:141] libmachine: (addons-469211)   </features>
	I0731 18:16:59.076623  403525 main.go:141] libmachine: (addons-469211)   <cpu mode='host-passthrough'>
	I0731 18:16:59.076637  403525 main.go:141] libmachine: (addons-469211)   
	I0731 18:16:59.076647  403525 main.go:141] libmachine: (addons-469211)   </cpu>
	I0731 18:16:59.076657  403525 main.go:141] libmachine: (addons-469211)   <os>
	I0731 18:16:59.076669  403525 main.go:141] libmachine: (addons-469211)     <type>hvm</type>
	I0731 18:16:59.076683  403525 main.go:141] libmachine: (addons-469211)     <boot dev='cdrom'/>
	I0731 18:16:59.076696  403525 main.go:141] libmachine: (addons-469211)     <boot dev='hd'/>
	I0731 18:16:59.076704  403525 main.go:141] libmachine: (addons-469211)     <bootmenu enable='no'/>
	I0731 18:16:59.076714  403525 main.go:141] libmachine: (addons-469211)   </os>
	I0731 18:16:59.076720  403525 main.go:141] libmachine: (addons-469211)   <devices>
	I0731 18:16:59.076727  403525 main.go:141] libmachine: (addons-469211)     <disk type='file' device='cdrom'>
	I0731 18:16:59.076737  403525 main.go:141] libmachine: (addons-469211)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/boot2docker.iso'/>
	I0731 18:16:59.076744  403525 main.go:141] libmachine: (addons-469211)       <target dev='hdc' bus='scsi'/>
	I0731 18:16:59.076753  403525 main.go:141] libmachine: (addons-469211)       <readonly/>
	I0731 18:16:59.076765  403525 main.go:141] libmachine: (addons-469211)     </disk>
	I0731 18:16:59.076777  403525 main.go:141] libmachine: (addons-469211)     <disk type='file' device='disk'>
	I0731 18:16:59.076811  403525 main.go:141] libmachine: (addons-469211)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:16:59.076824  403525 main.go:141] libmachine: (addons-469211)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/addons-469211.rawdisk'/>
	I0731 18:16:59.076830  403525 main.go:141] libmachine: (addons-469211)       <target dev='hda' bus='virtio'/>
	I0731 18:16:59.076837  403525 main.go:141] libmachine: (addons-469211)     </disk>
	I0731 18:16:59.076849  403525 main.go:141] libmachine: (addons-469211)     <interface type='network'>
	I0731 18:16:59.076864  403525 main.go:141] libmachine: (addons-469211)       <source network='mk-addons-469211'/>
	I0731 18:16:59.076875  403525 main.go:141] libmachine: (addons-469211)       <model type='virtio'/>
	I0731 18:16:59.076885  403525 main.go:141] libmachine: (addons-469211)     </interface>
	I0731 18:16:59.076908  403525 main.go:141] libmachine: (addons-469211)     <interface type='network'>
	I0731 18:16:59.076920  403525 main.go:141] libmachine: (addons-469211)       <source network='default'/>
	I0731 18:16:59.076933  403525 main.go:141] libmachine: (addons-469211)       <model type='virtio'/>
	I0731 18:16:59.076944  403525 main.go:141] libmachine: (addons-469211)     </interface>
	I0731 18:16:59.076955  403525 main.go:141] libmachine: (addons-469211)     <serial type='pty'>
	I0731 18:16:59.076966  403525 main.go:141] libmachine: (addons-469211)       <target port='0'/>
	I0731 18:16:59.076974  403525 main.go:141] libmachine: (addons-469211)     </serial>
	I0731 18:16:59.076987  403525 main.go:141] libmachine: (addons-469211)     <console type='pty'>
	I0731 18:16:59.077000  403525 main.go:141] libmachine: (addons-469211)       <target type='serial' port='0'/>
	I0731 18:16:59.077009  403525 main.go:141] libmachine: (addons-469211)     </console>
	I0731 18:16:59.077026  403525 main.go:141] libmachine: (addons-469211)     <rng model='virtio'>
	I0731 18:16:59.077049  403525 main.go:141] libmachine: (addons-469211)       <backend model='random'>/dev/random</backend>
	I0731 18:16:59.077073  403525 main.go:141] libmachine: (addons-469211)     </rng>
	I0731 18:16:59.077090  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.077104  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.077119  403525 main.go:141] libmachine: (addons-469211)   </devices>
	I0731 18:16:59.077130  403525 main.go:141] libmachine: (addons-469211) </domain>
	I0731 18:16:59.077139  403525 main.go:141] libmachine: (addons-469211) 
	I0731 18:16:59.083139  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:6a:41:e1 in network default
	I0731 18:16:59.083690  403525 main.go:141] libmachine: (addons-469211) Ensuring networks are active...
	I0731 18:16:59.083708  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:16:59.084301  403525 main.go:141] libmachine: (addons-469211) Ensuring network default is active
	I0731 18:16:59.084599  403525 main.go:141] libmachine: (addons-469211) Ensuring network mk-addons-469211 is active
	I0731 18:16:59.085079  403525 main.go:141] libmachine: (addons-469211) Getting domain xml...
	I0731 18:16:59.085694  403525 main.go:141] libmachine: (addons-469211) Creating domain...
	I0731 18:17:00.522102  403525 main.go:141] libmachine: (addons-469211) Waiting to get IP...
	I0731 18:17:00.522961  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:00.523409  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:00.523435  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:00.523394  403547 retry.go:31] will retry after 255.733272ms: waiting for machine to come up
	I0731 18:17:00.781047  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:00.781769  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:00.781798  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:00.781710  403547 retry.go:31] will retry after 348.448819ms: waiting for machine to come up
	I0731 18:17:01.131221  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:01.131642  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:01.131674  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:01.131583  403547 retry.go:31] will retry after 470.018453ms: waiting for machine to come up
	I0731 18:17:01.603271  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:01.603761  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:01.603794  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:01.603707  403547 retry.go:31] will retry after 465.247494ms: waiting for machine to come up
	I0731 18:17:02.070353  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:02.070784  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:02.070868  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:02.070764  403547 retry.go:31] will retry after 524.894257ms: waiting for machine to come up
	I0731 18:17:02.597587  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:02.597993  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:02.598023  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:02.597937  403547 retry.go:31] will retry after 918.935628ms: waiting for machine to come up
	I0731 18:17:03.518773  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:03.519126  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:03.519179  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:03.519075  403547 retry.go:31] will retry after 906.928454ms: waiting for machine to come up
	I0731 18:17:04.427174  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:04.427537  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:04.427568  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:04.427472  403547 retry.go:31] will retry after 1.311363775s: waiting for machine to come up
	I0731 18:17:05.740966  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:05.741455  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:05.741488  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:05.741360  403547 retry.go:31] will retry after 1.50986554s: waiting for machine to come up
	I0731 18:17:07.252971  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:07.253336  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:07.253369  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:07.253269  403547 retry.go:31] will retry after 1.760852072s: waiting for machine to come up
	I0731 18:17:09.016358  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:09.016787  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:09.016821  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:09.016720  403547 retry.go:31] will retry after 1.866108056s: waiting for machine to come up
	I0731 18:17:10.885962  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:10.886352  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:10.886379  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:10.886339  403547 retry.go:31] will retry after 3.530188806s: waiting for machine to come up
	I0731 18:17:14.418449  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:14.418895  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:14.418926  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:14.418858  403547 retry.go:31] will retry after 3.789908324s: waiting for machine to come up
	I0731 18:17:18.210719  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:18.211150  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:18.211176  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:18.211114  403547 retry.go:31] will retry after 4.872628016s: waiting for machine to come up
	I0731 18:17:23.086081  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.086521  403525 main.go:141] libmachine: (addons-469211) Found IP for machine: 192.168.39.187
	I0731 18:17:23.086548  403525 main.go:141] libmachine: (addons-469211) Reserving static IP address...
	I0731 18:17:23.086577  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has current primary IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.086962  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find host DHCP lease matching {name: "addons-469211", mac: "52:54:00:62:76:b3", ip: "192.168.39.187"} in network mk-addons-469211
	I0731 18:17:23.159724  403525 main.go:141] libmachine: (addons-469211) DBG | Getting to WaitForSSH function...
	I0731 18:17:23.159761  403525 main.go:141] libmachine: (addons-469211) Reserved static IP address: 192.168.39.187
	I0731 18:17:23.159776  403525 main.go:141] libmachine: (addons-469211) Waiting for SSH to be available...
	I0731 18:17:23.162036  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.162481  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.162517  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.162627  403525 main.go:141] libmachine: (addons-469211) DBG | Using SSH client type: external
	I0731 18:17:23.162656  403525 main.go:141] libmachine: (addons-469211) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa (-rw-------)
	I0731 18:17:23.162687  403525 main.go:141] libmachine: (addons-469211) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:17:23.162703  403525 main.go:141] libmachine: (addons-469211) DBG | About to run SSH command:
	I0731 18:17:23.162717  403525 main.go:141] libmachine: (addons-469211) DBG | exit 0
	I0731 18:17:23.289189  403525 main.go:141] libmachine: (addons-469211) DBG | SSH cmd err, output: <nil>: 
	I0731 18:17:23.289505  403525 main.go:141] libmachine: (addons-469211) KVM machine creation complete!
	I0731 18:17:23.289795  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:17:23.290474  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:23.290690  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:23.290897  403525 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:17:23.290915  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:23.292447  403525 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:17:23.292473  403525 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:17:23.292487  403525 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:17:23.292495  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.294621  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.294946  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.294976  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.295079  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.295275  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.295456  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.295612  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.295794  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.296019  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.296032  403525 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:17:23.404003  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:17:23.404025  403525 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:17:23.404033  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.407001  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.407394  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.407428  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.407643  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.407849  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.408067  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.408200  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.408363  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.408588  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.408604  403525 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:17:23.517404  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:17:23.517516  403525 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:17:23.517531  403525 main.go:141] libmachine: Provisioning with buildroot...
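The provisioner detection above boils down to reading /etc/os-release over SSH and matching its ID field ("buildroot" here). A rough sketch, assuming a simple ID= lookup rather than libmachine's real matching logic:

	// Rough sketch (not libmachine's actual detection): pull the ID= field out
	// of "cat /etc/os-release" output to pick a provisioner name.
	package main

	import (
		"fmt"
		"strings"
	)

	func detectProvisioner(osRelease string) string {
		for _, line := range strings.Split(osRelease, "\n") {
			if v, ok := strings.CutPrefix(strings.TrimSpace(line), "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println(detectProvisioner(out)) // prints: buildroot
	}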
	I0731 18:17:23.517544  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.517807  403525 buildroot.go:166] provisioning hostname "addons-469211"
	I0731 18:17:23.517839  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.518073  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.520809  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.521240  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.521271  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.521547  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.521761  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.521955  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.522094  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.522299  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.522537  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.522554  403525 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-469211 && echo "addons-469211" | sudo tee /etc/hostname
	I0731 18:17:23.646450  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-469211
	
	I0731 18:17:23.646475  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.649397  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.649705  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.649730  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.649905  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.650137  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.650293  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.650419  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.650555  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.650735  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.650757  403525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-469211' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-469211/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-469211' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:17:23.770332  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:17:23.770388  403525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:17:23.770422  403525 buildroot.go:174] setting up certificates
	I0731 18:17:23.770443  403525 provision.go:84] configureAuth start
	I0731 18:17:23.770459  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.770736  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:23.773283  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.773714  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.773744  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.773915  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.776287  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.776684  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.776710  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.776897  403525 provision.go:143] copyHostCerts
	I0731 18:17:23.776979  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:17:23.777128  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:17:23.777216  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:17:23.777290  403525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.addons-469211 san=[127.0.0.1 192.168.39.187 addons-469211 localhost minikube]
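The server certificate above is signed by the local minikube CA and carries the SANs listed in the log entry. Below is a sketch of that flow with Go's crypto/x509 (a throwaway CA plus one server cert); the 3-year validity mirrors the CertExpiration:26280h0m0s setting seen later in the cluster config, and the rest is illustrative, not minikube's provision code:

	// Sketch (not minikube's provision code): create a throwaway CA, then issue
	// a server certificate carrying the same SANs the log lists for addons-469211.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-469211"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: [127.0.0.1 192.168.39.187 addons-469211 localhost minikube]
			DNSNames:    []string{"addons-469211", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
	}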
	I0731 18:17:23.893530  403525 provision.go:177] copyRemoteCerts
	I0731 18:17:23.893600  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:17:23.893635  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.896263  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.896622  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.896649  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.896870  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.897078  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.897234  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.897366  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:23.984714  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:17:24.011612  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:17:24.037480  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:17:24.063900  403525 provision.go:87] duration metric: took 293.437893ms to configureAuth
	I0731 18:17:24.063933  403525 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:17:24.064154  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:24.064251  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.066992  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.067352  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.067389  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.067609  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.067846  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.068017  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.068201  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.068359  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:24.068564  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:24.068584  403525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:17:24.346387  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:17:24.346420  403525 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:17:24.346431  403525 main.go:141] libmachine: (addons-469211) Calling .GetURL
	I0731 18:17:24.347933  403525 main.go:141] libmachine: (addons-469211) DBG | Using libvirt version 6000000
	I0731 18:17:24.350145  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.350486  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.350518  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.350650  403525 main.go:141] libmachine: Docker is up and running!
	I0731 18:17:24.350685  403525 main.go:141] libmachine: Reticulating splines...
	I0731 18:17:24.350694  403525 client.go:171] duration metric: took 26.399860648s to LocalClient.Create
	I0731 18:17:24.350723  403525 start.go:167] duration metric: took 26.399928962s to libmachine.API.Create "addons-469211"
	I0731 18:17:24.350738  403525 start.go:293] postStartSetup for "addons-469211" (driver="kvm2")
	I0731 18:17:24.350753  403525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:17:24.350778  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.351056  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:17:24.351088  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.353363  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.353717  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.353736  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.353900  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.354096  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.354265  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.354445  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.440250  403525 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:17:24.445334  403525 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:17:24.445390  403525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:17:24.445486  403525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:17:24.445517  403525 start.go:296] duration metric: took 94.769189ms for postStartSetup
	I0731 18:17:24.445564  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:17:24.446215  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:24.448911  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.449261  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.449291  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.449523  403525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json ...
	I0731 18:17:24.449742  403525 start.go:128] duration metric: took 26.518040171s to createHost
	I0731 18:17:24.449767  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.452287  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.452604  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.452637  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.452766  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.452964  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.453130  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.453280  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.453412  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:24.453572  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:24.453582  403525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:17:24.561454  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449844.540035441
	
	I0731 18:17:24.561482  403525 fix.go:216] guest clock: 1722449844.540035441
	I0731 18:17:24.561490  403525 fix.go:229] Guest: 2024-07-31 18:17:24.540035441 +0000 UTC Remote: 2024-07-31 18:17:24.449755382 +0000 UTC m=+26.623671337 (delta=90.280059ms)
	I0731 18:17:24.561531  403525 fix.go:200] guest clock delta is within tolerance: 90.280059ms
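The guest-clock check above parses the guest's "date +%s.%N" reading and compares it against the host clock. A small Go sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not necessarily minikube's threshold:

	// Sketch of the clock-skew check: parse seconds.nanoseconds from the guest
	// and compare with the local clock. The tolerance value is illustrative only.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		out := "1722449844.540035441" // guest `date +%s.%N` output from the log
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold for illustration
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
	}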
	I0731 18:17:24.561541  403525 start.go:83] releasing machines lock for "addons-469211", held for 26.629924175s
	I0731 18:17:24.561573  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.562035  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:24.565002  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.565288  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.565309  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.565549  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566008  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566226  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566348  403525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:17:24.566401  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.566479  403525 ssh_runner.go:195] Run: cat /version.json
	I0731 18:17:24.566508  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.569251  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569301  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569690  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.569717  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569751  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.569769  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569847  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.570005  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.570106  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.570170  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.570234  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.570312  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.570396  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.570558  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.671372  403525 ssh_runner.go:195] Run: systemctl --version
	I0731 18:17:24.677631  403525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:17:24.845020  403525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:17:24.851012  403525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:17:24.851107  403525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:17:24.867371  403525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:17:24.867405  403525 start.go:495] detecting cgroup driver to use...
	I0731 18:17:24.867530  403525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:17:24.884978  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:17:24.899614  403525 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:17:24.899697  403525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:17:24.914472  403525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:17:24.928564  403525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:17:25.044847  403525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:17:25.207671  403525 docker.go:233] disabling docker service ...
	I0731 18:17:25.207770  403525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:17:25.222696  403525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:17:25.236238  403525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:17:25.360802  403525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:17:25.483581  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:17:25.498471  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:17:25.517487  403525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:17:25.517569  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.529166  403525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:17:25.529252  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.540411  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.551586  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.562618  403525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:17:25.574388  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.585608  403525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.603234  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.615010  403525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:17:25.625038  403525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:17:25.625133  403525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:17:25.639603  403525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:17:25.649980  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:17:25.766857  403525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:17:25.903790  403525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:17:25.903893  403525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:17:25.908981  403525 start.go:563] Will wait 60s for crictl version
	I0731 18:17:25.909074  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:17:25.913349  403525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:17:25.954241  403525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:17:25.954384  403525 ssh_runner.go:195] Run: crio --version
	I0731 18:17:25.983832  403525 ssh_runner.go:195] Run: crio --version
	I0731 18:17:26.013808  403525 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:17:26.015025  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:26.018094  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:26.018446  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:26.018472  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:26.018734  403525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:17:26.023287  403525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:17:26.036267  403525 kubeadm.go:883] updating cluster {Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:17:26.036438  403525 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:17:26.036496  403525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:17:26.070571  403525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:17:26.070686  403525 ssh_runner.go:195] Run: which lz4
	I0731 18:17:26.075110  403525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:17:26.079436  403525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:17:26.079470  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:17:27.416664  403525 crio.go:462] duration metric: took 1.341590497s to copy over tarball
	I0731 18:17:27.416745  403525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:17:29.704086  403525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.287304873s)
	I0731 18:17:29.704129  403525 crio.go:469] duration metric: took 2.287433068s to extract the tarball
	I0731 18:17:29.704141  403525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:17:29.742365  403525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:17:29.786765  403525 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:17:29.786791  403525 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:17:29.786800  403525 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.30.3 crio true true} ...
	I0731 18:17:29.786973  403525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-469211 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:17:29.787063  403525 ssh_runner.go:195] Run: crio config
	I0731 18:17:29.832104  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:17:29.832129  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:17:29.832152  403525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:17:29.832175  403525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-469211 NodeName:addons-469211 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:17:29.832331  403525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-469211"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:17:29.832442  403525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:17:29.842793  403525 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:17:29.842918  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:17:29.852751  403525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 18:17:29.869448  403525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:17:29.886357  403525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 18:17:29.903459  403525 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0731 18:17:29.907575  403525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:17:29.921718  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:17:30.055968  403525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:17:30.083106  403525 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211 for IP: 192.168.39.187
	I0731 18:17:30.083136  403525 certs.go:194] generating shared ca certs ...
	I0731 18:17:30.083159  403525 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.083353  403525 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:17:30.174620  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt ...
	I0731 18:17:30.174653  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt: {Name:mk708a6cde81dea79b45116658d3ff1bc40d565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.174821  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key ...
	I0731 18:17:30.174832  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key: {Name:mka0b6105bb80f7ef14e64fd9743c2f620c475d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.174907  403525 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:17:30.242518  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt ...
	I0731 18:17:30.242549  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt: {Name:mk43214b6f02650cbebf8422c755c00b188077ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.242712  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key ...
	I0731 18:17:30.242722  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key: {Name:mk8dd5a5172815e6b1d2fd70a7a880625c4287a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.242791  403525 certs.go:256] generating profile certs ...
	I0731 18:17:30.242907  403525 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key
	I0731 18:17:30.242923  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt with IP's: []
	I0731 18:17:30.432606  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt ...
	I0731 18:17:30.432648  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: {Name:mk11b3ded6f747bee8843390ec5f205bc4e0af1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.432847  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key ...
	I0731 18:17:30.432860  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key: {Name:mkd8065150e0b6b0d8b07ceca4d4ab2de2142b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.432948  403525 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d
	I0731 18:17:30.432969  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187]
	I0731 18:17:30.556658  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d ...
	I0731 18:17:30.556695  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d: {Name:mk06c796e440bd2a0d06b4f549d2107dbdee4829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.556889  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d ...
	I0731 18:17:30.556905  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d: {Name:mk20996de13e3c0b2ca71f44ce2cb2586353edaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.556984  403525 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt
	I0731 18:17:30.557062  403525 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key
	I0731 18:17:30.557114  403525 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key
	I0731 18:17:30.557167  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt with IP's: []
	I0731 18:17:30.768352  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt ...
	I0731 18:17:30.768401  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt: {Name:mka013489cf097b934dd44f9e58f88346af08b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.768597  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key ...
	I0731 18:17:30.768614  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key: {Name:mkb51ce469763d417db85296e1ba2b76097f6efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.768805  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:17:30.768845  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:17:30.768872  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:17:30.768899  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:17:30.769574  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:17:30.805498  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:17:30.849106  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:17:30.878265  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:17:30.902555  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 18:17:30.928658  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:17:30.952770  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:17:30.976774  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:17:31.000983  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:17:31.024988  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:17:31.042033  403525 ssh_runner.go:195] Run: openssl version
	I0731 18:17:31.048775  403525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:17:31.060012  403525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.064644  403525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.064729  403525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.070691  403525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:17:31.082265  403525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:17:31.086488  403525 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:17:31.086582  403525 kubeadm.go:392] StartCluster: {Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:17:31.086665  403525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:17:31.086706  403525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:17:31.124907  403525 cri.go:89] found id: ""
	I0731 18:17:31.124997  403525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:17:31.135123  403525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:17:31.144992  403525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:17:31.154689  403525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:17:31.154707  403525 kubeadm.go:157] found existing configuration files:
	
	I0731 18:17:31.154752  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:17:31.163822  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:17:31.163969  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:17:31.174090  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:17:31.183495  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:17:31.183552  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:17:31.193244  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:17:31.202228  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:17:31.202282  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:17:31.211935  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:17:31.221121  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:17:31.221171  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:17:31.230754  403525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:17:31.295302  403525 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:17:31.295927  403525 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:17:31.445698  403525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:17:31.445792  403525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:17:31.445876  403525 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:17:31.655085  403525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:17:31.798689  403525 out.go:204]   - Generating certificates and keys ...
	I0731 18:17:31.798822  403525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:17:31.798940  403525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:17:31.940458  403525 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 18:17:32.189616  403525 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 18:17:32.397725  403525 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 18:17:32.557690  403525 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 18:17:32.642631  403525 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 18:17:32.642800  403525 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-469211 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0731 18:17:32.756548  403525 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 18:17:32.756778  403525 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-469211 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0731 18:17:32.880514  403525 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 18:17:33.145182  403525 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 18:17:33.383751  403525 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 18:17:33.384050  403525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:17:33.619447  403525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:17:33.685479  403525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:17:33.835984  403525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:17:34.108804  403525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:17:34.212350  403525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:17:34.214162  403525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:17:34.217122  403525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:17:34.219052  403525 out.go:204]   - Booting up control plane ...
	I0731 18:17:34.219185  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:17:34.219258  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:17:34.219355  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:17:34.235090  403525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:17:34.237164  403525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:17:34.237428  403525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:17:34.362564  403525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:17:34.362657  403525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:17:34.862904  403525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56592ms
	I0731 18:17:34.863002  403525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:17:39.863307  403525 kubeadm.go:310] [api-check] The API server is healthy after 5.001685671s
	I0731 18:17:39.874044  403525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:17:39.897593  403525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:17:39.936622  403525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:17:39.936830  403525 kubeadm.go:310] [mark-control-plane] Marking the node addons-469211 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:17:39.950364  403525 kubeadm.go:310] [bootstrap-token] Using token: i5tlvs.bruakb7fr5op4n2g
	I0731 18:17:39.951744  403525 out.go:204]   - Configuring RBAC rules ...
	I0731 18:17:39.951866  403525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:17:39.961418  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:17:39.972463  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:17:39.982890  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:17:39.994865  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:17:40.010766  403525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:17:40.268746  403525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:17:40.716132  403525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:17:41.272902  403525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:17:41.273790  403525 kubeadm.go:310] 
	I0731 18:17:41.273858  403525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:17:41.273896  403525 kubeadm.go:310] 
	I0731 18:17:41.274039  403525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:17:41.274060  403525 kubeadm.go:310] 
	I0731 18:17:41.274110  403525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:17:41.274202  403525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:17:41.274293  403525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:17:41.274305  403525 kubeadm.go:310] 
	I0731 18:17:41.274386  403525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:17:41.274398  403525 kubeadm.go:310] 
	I0731 18:17:41.274458  403525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:17:41.274466  403525 kubeadm.go:310] 
	I0731 18:17:41.274534  403525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:17:41.274620  403525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:17:41.274718  403525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:17:41.274728  403525 kubeadm.go:310] 
	I0731 18:17:41.274830  403525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:17:41.274942  403525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:17:41.274951  403525 kubeadm.go:310] 
	I0731 18:17:41.275048  403525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i5tlvs.bruakb7fr5op4n2g \
	I0731 18:17:41.275174  403525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd \
	I0731 18:17:41.275226  403525 kubeadm.go:310] 	--control-plane 
	I0731 18:17:41.275235  403525 kubeadm.go:310] 
	I0731 18:17:41.275358  403525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:17:41.275366  403525 kubeadm.go:310] 
	I0731 18:17:41.275484  403525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i5tlvs.bruakb7fr5op4n2g \
	I0731 18:17:41.275615  403525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd 
	I0731 18:17:41.276087  403525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 18:17:41.276122  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:17:41.276140  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:17:41.277953  403525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:17:41.279741  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:17:41.292101  403525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 18:17:41.311205  403525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:17:41.311347  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:41.311397  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-469211 minikube.k8s.io/updated_at=2024_07_31T18_17_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=addons-469211 minikube.k8s.io/primary=true
	I0731 18:17:41.344769  403525 ops.go:34] apiserver oom_adj: -16
	I0731 18:17:41.455531  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:41.955811  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:42.456250  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:42.955987  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:43.455945  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:43.955575  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:44.456586  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:44.956339  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:45.456492  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:45.956630  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:46.455849  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:46.956227  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:47.455778  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:47.955635  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:48.455651  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:48.955582  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:49.456058  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:49.955648  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:50.456347  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:50.956034  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:51.455684  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:51.955959  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:52.455959  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:52.956154  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:53.455969  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:53.956513  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:54.456366  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:54.540406  403525 kubeadm.go:1113] duration metric: took 13.229139346s to wait for elevateKubeSystemPrivileges
	I0731 18:17:54.540445  403525 kubeadm.go:394] duration metric: took 23.453870858s to StartCluster
	I0731 18:17:54.540478  403525 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:54.540617  403525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:17:54.541011  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:54.541208  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 18:17:54.541241  403525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:17:54.541313  403525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0731 18:17:54.541445  403525 addons.go:69] Setting yakd=true in profile "addons-469211"
	I0731 18:17:54.541492  403525 addons.go:69] Setting volcano=true in profile "addons-469211"
	I0731 18:17:54.541508  403525 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-469211"
	I0731 18:17:54.541486  403525 addons.go:69] Setting ingress=true in profile "addons-469211"
	I0731 18:17:54.541521  403525 addons.go:234] Setting addon volcano=true in "addons-469211"
	I0731 18:17:54.541523  403525 addons.go:69] Setting volumesnapshots=true in profile "addons-469211"
	I0731 18:17:54.541525  403525 addons.go:69] Setting registry=true in profile "addons-469211"
	I0731 18:17:54.541530  403525 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-469211"
	I0731 18:17:54.541538  403525 addons.go:234] Setting addon ingress=true in "addons-469211"
	I0731 18:17:54.541541  403525 addons.go:234] Setting addon volumesnapshots=true in "addons-469211"
	I0731 18:17:54.541549  403525 addons.go:234] Setting addon registry=true in "addons-469211"
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541575  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541494  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541456  403525 addons.go:69] Setting cloud-spanner=true in profile "addons-469211"
	I0731 18:17:54.541690  403525 addons.go:234] Setting addon cloud-spanner=true in "addons-469211"
	I0731 18:17:54.541463  403525 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-469211"
	I0731 18:17:54.541713  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541742  403525 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-469211"
	I0731 18:17:54.541780  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541467  403525 addons.go:69] Setting default-storageclass=true in profile "addons-469211"
	I0731 18:17:54.541880  403525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-469211"
	I0731 18:17:54.542082  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542121  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542144  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542144  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.541450  403525 addons.go:69] Setting inspektor-gadget=true in profile "addons-469211"
	I0731 18:17:54.542160  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542174  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542183  403525 addons.go:234] Setting addon inspektor-gadget=true in "addons-469211"
	I0731 18:17:54.542187  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542206  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541479  403525 addons.go:69] Setting ingress-dns=true in profile "addons-469211"
	I0731 18:17:54.542177  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542249  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542259  403525 addons.go:234] Setting addon ingress-dns=true in "addons-469211"
	I0731 18:17:54.541514  403525 addons.go:234] Setting addon yakd=true in "addons-469211"
	I0731 18:17:54.541448  403525 addons.go:69] Setting gcp-auth=true in profile "addons-469211"
	I0731 18:17:54.542271  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.541481  403525 addons.go:69] Setting storage-provisioner=true in profile "addons-469211"
	I0731 18:17:54.542286  403525 mustload.go:65] Loading cluster: addons-469211
	I0731 18:17:54.541516  403525 addons.go:69] Setting metrics-server=true in profile "addons-469211"
	I0731 18:17:54.542312  403525 addons.go:234] Setting addon metrics-server=true in "addons-469211"
	I0731 18:17:54.542252  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.541487  403525 addons.go:69] Setting helm-tiller=true in profile "addons-469211"
	I0731 18:17:54.542338  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542357  403525 addons.go:234] Setting addon helm-tiller=true in "addons-469211"
	I0731 18:17:54.542314  403525 addons.go:234] Setting addon storage-provisioner=true in "addons-469211"
	I0731 18:17:54.541570  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541510  403525 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-469211"
	I0731 18:17:54.542500  403525 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-469211"
	I0731 18:17:54.542555  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542563  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542570  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542580  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542599  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:54.542681  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542689  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542698  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542747  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542844  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542869  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542898  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542939  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542964  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542988  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543017  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.543033  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543105  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.543119  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543196  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.550666  403525 out.go:177] * Verifying Kubernetes components...
	I0731 18:17:54.552320  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:17:54.558127  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44897
	I0731 18:17:54.558720  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.559194  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0731 18:17:54.559305  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.559327  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.559549  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.559729  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.560340  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.560385  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.560705  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.560721  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.561137  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.561717  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.561758  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.561938  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0731 18:17:54.564910  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.564939  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.564956  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.564999  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.565023  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0731 18:17:54.569160  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.569214  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.569705  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.570176  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.570460  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.570485  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.570745  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.570763  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.570851  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.571406  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.571430  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.571603  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.572165  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.572210  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.589536  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0731 18:17:54.590071  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.590611  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.590633  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.590952  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.591163  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.591742  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0731 18:17:54.592236  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.592804  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.592821  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.595297  403525 addons.go:234] Setting addon default-storageclass=true in "addons-469211"
	I0731 18:17:54.595343  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.595720  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.595752  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.596154  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.596730  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.596772  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.600711  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0731 18:17:54.601242  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.601779  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.601800  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.601819  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0731 18:17:54.602133  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.602257  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.602876  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.602917  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.603528  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.603557  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.603972  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.604218  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.606223  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.606621  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.606655  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.610819  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0731 18:17:54.611240  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.611861  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.611882  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.612322  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.612590  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.614204  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0731 18:17:54.614408  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0731 18:17:54.614798  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.614923  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.615451  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.615469  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.615603  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.615613  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.615791  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.616176  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.616222  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.617077  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.617128  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.617668  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.617698  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.619084  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 18:17:54.620602  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:17:54.621977  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:17:54.622594  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:17:54.623132  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.623558  403525 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 18:17:54.623581  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 18:17:54.623602  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.623724  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.623743  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.625123  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.625727  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.625767  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.626554  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0731 18:17:54.628845  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0731 18:17:54.628981  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0731 18:17:54.629077  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.629137  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.629167  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.629185  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.629300  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.629558  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.629626  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.629712  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.630114  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.630129  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.630416  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0731 18:17:54.630578  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.631035  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.631482  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.631501  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.632138  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.632170  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.632593  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.632808  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.633718  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.633921  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.634445  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.634462  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.634519  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.635114  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.635133  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.635641  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.636101  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.636551  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 18:17:54.637230  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.637777  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 18:17:54.637798  403525 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 18:17:54.637818  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.638363  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.638408  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.639264  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41271
	I0731 18:17:54.639815  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.640028  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.640442  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:17:54.640466  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:17:54.642676  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.642703  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.642732  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.642748  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0731 18:17:54.642750  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.642770  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:17:54.642797  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:17:54.642805  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:17:54.642814  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:17:54.642821  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:17:54.643130  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:17:54.643161  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.643205  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:17:54.643213  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 18:17:54.643316  403525 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 18:17:54.643484  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.643652  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0731 18:17:54.643884  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.643899  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.644065  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.644073  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.644080  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.644763  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.645067  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.645157  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.645255  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.645489  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.645778  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.645795  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.646712  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.647013  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.647043  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.647239  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.647436  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.648910  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.649454  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 18:17:54.650628  403525 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 18:17:54.652105  403525 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 18:17:54.652128  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 18:17:54.652163  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.652239  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 18:17:54.653340  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0731 18:17:54.653547  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45137
	I0731 18:17:54.654008  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.654108  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.654931  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 18:17:54.654955  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.654973  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.655120  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.655132  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.655558  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0731 18:17:54.655574  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.655623  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.655945  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.656572  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.656590  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.657261  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.657307  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.657524  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.658243  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.658457  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.658696  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.658914  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0731 18:17:54.658953  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 18:17:54.659301  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.659324  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.659638  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.659723  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.659797  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.659938  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.660100  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.660549  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.660563  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.661042  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.661324  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.661486  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 18:17:54.662699  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 18:17:54.663693  403525 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-469211"
	I0731 18:17:54.663737  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.664091  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.664132  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.664353  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.664872  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 18:17:54.665341  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.665907  403525 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 18:17:54.666655  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0731 18:17:54.666913  403525 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 18:17:54.666919  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 18:17:54.666943  403525 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 18:17:54.666947  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 18:17:54.666964  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.668341  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 18:17:54.668359  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 18:17:54.668484  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.669954  403525 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 18:17:54.670414  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0731 18:17:54.671113  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.671564  403525 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 18:17:54.671581  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 18:17:54.671599  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.671727  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.671880  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.671894  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.672398  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.673075  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.673118  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.673405  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.674005  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.674022  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.674280  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.674407  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.674710  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.674731  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.674776  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.674958  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.675166  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.675235  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.675414  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.675646  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.675677  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.675698  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.675901  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.676211  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.676407  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.676431  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.676442  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.676566  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.676735  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.676753  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.676981  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.677218  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.677460  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.678842  403525 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 18:17:54.680004  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 18:17:54.680017  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 18:17:54.680030  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.683032  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.683641  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.683675  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.683891  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.684050  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.684240  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.684359  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.685573  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
	I0731 18:17:54.686143  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.686718  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.686735  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.687153  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.687376  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.688125  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0731 18:17:54.688600  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.689131  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.689149  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.689504  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.689708  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.689774  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.690341  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0731 18:17:54.690662  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.691159  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.691181  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.691514  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.691705  403525 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 18:17:54.691713  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.692982  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:17:54.693000  403525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:17:54.693020  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.694055  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0731 18:17:54.694092  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.694314  403525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:17:54.694331  403525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:17:54.694347  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.694407  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.694866  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.695009  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.695039  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.695726  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.695969  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.698018  403525 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 18:17:54.698703  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0731 18:17:54.698778  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.698957  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699125  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699323  403525 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 18:17:54.699342  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 18:17:54.699354  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.699360  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.699563  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.699593  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699733  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.699863  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.700076  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.700353  403525 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 18:17:54.700640  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.700665  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.700807  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.701033  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.701319  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.701334  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.701419  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.701552  403525 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 18:17:54.701549  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.701564  403525 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 18:17:54.701589  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.701808  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.702049  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.702668  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.705090  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705110  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705324  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I0731 18:17:54.705538  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.705564  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705749  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.705900  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.705920  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705961  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.706052  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.706068  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.706394  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.706437  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.706467  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.706482  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.706579  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.706700  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.706708  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.706890  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.707020  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.707727  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0731 18:17:54.708179  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.708661  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.708705  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.708719  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.708743  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40309
	I0731 18:17:54.709073  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.709350  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.709371  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.709841  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.709872  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.710302  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.710541  403525 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 18:17:54.711062  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.711098  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.711437  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.711795  403525 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 18:17:54.711817  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 18:17:54.711835  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.713024  403525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:17:54.714430  403525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:17:54.714451  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:17:54.714470  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.714539  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.714998  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.715020  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.715159  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.715343  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.715475  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.715611  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.717044  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.717398  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.717449  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.717602  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.717751  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.717966  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.718128  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	W0731 18:17:54.729949  403525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43180->192.168.39.187:22: read: connection reset by peer
	I0731 18:17:54.729979  403525 retry.go:31] will retry after 125.107357ms: ssh: handshake failed: read tcp 192.168.39.1:43180->192.168.39.187:22: read: connection reset by peer
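	The two lines above show the ssh layer hitting a transient handshake failure and scheduling a retry after a short delay. The loop below is only an illustrative sketch of that retry-with-backoff behaviour; the key path, user, IP and initial delay are copied from the log lines above, while the loop itself, the attempt count and the doubling are assumptions for illustration, not minikube's actual retry.go logic.

	```bash
	# Illustrative retry-with-backoff sketch (hypothetical); values are taken from the
	# sshutil/retry log lines above, the loop structure is not minikube's code.
	key=/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa
	delay=0.125
	for attempt in 1 2 3 4 5; do
	  ssh -i "$key" -o StrictHostKeyChecking=no docker@192.168.39.187 true && break
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # double the delay before the next attempt
	done
	```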
	I0731 18:17:54.744514  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0731 18:17:54.745067  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.745610  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.745629  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.745911  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.746131  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.747732  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.749684  403525 out.go:177]   - Using image docker.io/busybox:stable
	I0731 18:17:54.751136  403525 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 18:17:54.752694  403525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 18:17:54.752712  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 18:17:54.752735  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.755736  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.756230  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.756263  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.756528  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.756754  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.756924  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.757124  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.994401  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 18:17:55.051797  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 18:17:55.051838  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 18:17:55.069661  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 18:17:55.078859  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:17:55.078884  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 18:17:55.146885  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:17:55.146919  403525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:17:55.214417  403525 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 18:17:55.214445  403525 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 18:17:55.235318  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:17:55.252043  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:17:55.252083  403525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:17:55.293004  403525 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 18:17:55.293039  403525 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 18:17:55.294405  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 18:17:55.300271  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 18:17:55.312649  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 18:17:55.312676  403525 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 18:17:55.352285  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:17:55.372808  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 18:17:55.372849  403525 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 18:17:55.378691  403525 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 18:17:55.378722  403525 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 18:17:55.380002  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 18:17:55.380022  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 18:17:55.453526  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 18:17:55.453567  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 18:17:55.460502  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:17:55.498328  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 18:17:55.498357  403525 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 18:17:55.511320  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 18:17:55.535486  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 18:17:55.535513  403525 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 18:17:55.556542  403525 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 18:17:55.556571  403525 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 18:17:55.610890  403525 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 18:17:55.610911  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 18:17:55.630872  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 18:17:55.630901  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 18:17:55.636638  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 18:17:55.636669  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 18:17:55.642310  403525 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.089957062s)
	I0731 18:17:55.642329  403525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.10108413s)
	I0731 18:17:55.642384  403525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:17:55.642478  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 18:17:55.801921  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 18:17:55.801956  403525 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 18:17:55.845719  403525 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 18:17:55.845749  403525 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 18:17:55.890515  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 18:17:55.892563  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 18:17:55.892593  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 18:17:56.008807  403525 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 18:17:56.008951  403525 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 18:17:56.014333  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 18:17:56.014362  403525 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 18:17:56.031342  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 18:17:56.038388  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 18:17:56.038415  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 18:17:56.103015  403525 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 18:17:56.103042  403525 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 18:17:56.151491  403525 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:17:56.151513  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 18:17:56.213402  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 18:17:56.358794  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 18:17:56.358832  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 18:17:56.459741  403525 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 18:17:56.459773  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 18:17:56.493582  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:17:56.770418  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 18:17:56.780669  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 18:17:56.780703  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 18:17:56.966527  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 18:17:56.966564  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 18:17:57.410036  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 18:17:57.410070  403525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 18:17:57.853552  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 18:17:57.853626  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 18:17:58.128245  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 18:17:58.128277  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 18:17:58.434289  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 18:17:58.434320  403525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 18:17:58.751688  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 18:18:01.720288  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 18:18:01.720342  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:18:01.723616  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:01.724047  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:18:01.724083  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:01.724245  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:18:01.724478  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:18:01.724652  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:18:01.724878  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:18:01.992697  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 18:18:02.126711  403525 addons.go:234] Setting addon gcp-auth=true in "addons-469211"
	I0731 18:18:02.126796  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:18:02.127267  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:18:02.127311  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:18:02.143455  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0731 18:18:02.144011  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:18:02.144590  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:18:02.144610  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:18:02.145047  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:18:02.145597  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:18:02.145627  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:18:02.161990  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0731 18:18:02.162489  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:18:02.163176  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:18:02.163201  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:18:02.163587  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:18:02.163849  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:18:02.165501  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:18:02.165767  403525 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 18:18:02.165799  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:18:02.168800  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:02.169275  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:18:02.169306  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:02.169549  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:18:02.169742  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:18:02.169929  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:18:02.170074  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:18:02.783909  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.789464771s)
	I0731 18:18:02.783974  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.714274852s)
	I0731 18:18:02.783988  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784002  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784019  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784033  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784038  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.548680683s)
	I0731 18:18:02.784076  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784100  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784115  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.489678106s)
	I0731 18:18:02.784156  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784166  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784204  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.483894101s)
	I0731 18:18:02.784234  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784247  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784281  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.431963868s)
	I0731 18:18:02.784297  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784307  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784318  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.323796491s)
	I0731 18:18:02.784333  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784341  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784398  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.273055005s)
	I0731 18:18:02.784414  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784423  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784455  403525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.141954039s)
	I0731 18:18:02.784472  403525 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.142072332s)
	I0731 18:18:02.784479  403525 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
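	For context, the host record reported here comes from the sed pipeline completed a few lines earlier, which inserts a hosts stanza into the coredns ConfigMap so in-cluster pods can resolve host.minikube.internal to the host bridge IP. The block below is a reconstruction from that sed expression plus a standard way to inspect the result; neither is taken verbatim from the cluster.

	```bash
	# Inserted into the Corefile ahead of the "forward . /etc/resolv.conf" line
	# (reconstructed from the sed expression logged above):
	#
	#     hosts {
	#        192.168.39.1 host.minikube.internal
	#        fallthrough
	#     }
	#
	# Inspect the live Corefile to confirm the record:
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	```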
	I0731 18:18:02.784707  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784722  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784735  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784739  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784744  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784753  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784751  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784775  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784785  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784793  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784795  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784801  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784806  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784810  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784819  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784884  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784913  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784921  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784931  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784939  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784994  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785019  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785030  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785049  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785058  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785109  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785164  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785172  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785243  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785287  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785301  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785369  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.894809588s)
	I0731 18:18:02.785391  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785401  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785473  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.754104733s)
	I0731 18:18:02.785487  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785494  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785556  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.57212321s)
	I0731 18:18:02.785568  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785576  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785702  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.29208185s)
	W0731 18:18:02.785730  403525 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 18:18:02.785750  403525 retry.go:31] will retry after 370.884373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
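	This failure is the usual CRD-ordering problem: the VolumeSnapshotClass is submitted in the same apply as the CRDs that define it, and the API server has not registered the new kind yet, so the apply is retried. A minimal sketch of the ordering that lets the second attempt succeed, using the addon file paths from the log (the explicit wait step is an assumption added for illustration, not something minikube runs):

	```bash
	# Apply the snapshot CRDs first, wait until the API server has established them,
	# then apply the VolumeSnapshotClass that depends on them.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	```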
	I0731 18:18:02.785824  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.015371478s)
	I0731 18:18:02.785840  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785848  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785917  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785920  403525 node_ready.go:35] waiting up to 6m0s for node "addons-469211" to be "Ready" ...
	I0731 18:18:02.785939  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785945  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785952  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785958  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785995  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.786013  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.786019  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.786026  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.786032  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.786065  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.786083  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.786089  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787518  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787547  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787558  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.787567  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.787662  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787683  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787690  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787698  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787708  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787716  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787894  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787923  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787932  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787941  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.787950  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788007  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788081  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788091  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788100  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.788108  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788164  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788190  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788198  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788208  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.788216  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788268  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788290  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788298  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788308  403525 addons.go:475] Verifying addon ingress=true in "addons-469211"
	I0731 18:18:02.789772  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789795  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789800  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789811  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.789819  403525 addons.go:475] Verifying addon metrics-server=true in "addons-469211"
	I0731 18:18:02.789827  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789834  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.789891  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789933  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789939  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.791991  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792022  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792029  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792255  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792267  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792275  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.792285  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.792389  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792440  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792456  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792467  403525 addons.go:475] Verifying addon registry=true in "addons-469211"
	I0731 18:18:02.792615  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792639  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.793190  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.793216  403525 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-469211 service yakd-dashboard -n yakd-dashboard
	
	I0731 18:18:02.793321  403525 out.go:177] * Verifying ingress addon...
	I0731 18:18:02.794357  403525 out.go:177] * Verifying registry addon...
	I0731 18:18:02.796229  403525 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 18:18:02.796838  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 18:18:02.811133  403525 node_ready.go:49] node "addons-469211" has status "Ready":"True"
	I0731 18:18:02.811163  403525 node_ready.go:38] duration metric: took 25.224048ms for node "addons-469211" to be "Ready" ...
	I0731 18:18:02.811177  403525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:18:02.847144  403525 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 18:18:02.847188  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
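The repeated kapi.go:96 entries above and below are minikube's addon verifier polling the cluster until every pod matching a label selector reports the Ready condition. A minimal client-go sketch of that style of readiness check follows; the kubeconfig path, namespace, selector, and poll interval are illustrative assumptions, not minikube's actual implementation.

    // poll_ready.go - illustrative only: wait until all pods matching a label
    // selector in a namespace report the Ready condition.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // allReady reports whether every pod in the slice has the Ready condition set to True.
    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return len(pods) > 0
    }

    func main() {
        // Assumed kubeconfig location; minikube manages its own kubeconfig per profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        selector := "app.kubernetes.io/name=ingress-nginx" // selector taken from the log above
        for {
            list, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                panic(err)
            }
            if allReady(list.Items) {
                fmt.Println("all pods ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // illustrative cadence, similar to the log's ~0.5s spacing
        }
    }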
	I0731 18:18:02.877381  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.877412  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.877815  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.877836  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 18:18:02.877948  403525 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
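The "Operation cannot be fulfilled ... the object has been modified" warning above is the API server's optimistic-concurrency check rejecting an update made with a stale resourceVersion: something else modified the local-path StorageClass between the read and the write. The standard remedy is to re-read and retry the update, roughly as in this hedged client-go sketch (the storage class name and annotation key come from the log line; everything else is illustrative, not minikube's code).

    // retry_conflict.go - illustrative sketch: clear the default-class annotation
    // on a StorageClass, retrying on resourceVersion conflicts.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-read the object on every attempt so each update carries a fresh resourceVersion.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            // Well-known annotation that marks the cluster's default storage class.
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            panic(err)
        }
    }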
	I0731 18:18:02.897143  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.897174  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.897525  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.897553  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.897557  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.922855  403525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:02.942021  403525 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 18:18:02.942046  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:02.984340  403525 pod_ready.go:92] pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:02.984408  403525 pod_ready.go:81] duration metric: took 61.520217ms for pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:02.984426  403525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.143063  403525 pod_ready.go:92] pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.143108  403525 pod_ready.go:81] duration metric: took 158.671617ms for pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.143124  403525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.156807  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:18:03.266537  403525 pod_ready.go:92] pod "etcd-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.266568  403525 pod_ready.go:81] duration metric: took 123.435127ms for pod "etcd-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.266582  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.294773  403525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-469211" context rescaled to 1 replicas
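The kapi.go:214 line records minikube trimming the coredns Deployment to one replica for the single-node cluster. Rescaling a Deployment through its scale subresource looks roughly like the sketch below; the deployment name and namespace are taken from the log, and the rest is an assumed illustration rather than the code path behind kapi.go.

    // rescale.go - illustrative sketch: set a Deployment's replica count via the
    // scale subresource.
    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deployments := cs.AppsV1().Deployments("kube-system")
        scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // one replica is enough for a single-node cluster
        if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }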
	I0731 18:18:03.303200  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:03.308753  403525 pod_ready.go:92] pod "kube-apiserver-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.308779  403525 pod_ready.go:81] duration metric: took 42.188541ms for pod "kube-apiserver-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.308791  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.309669  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:03.319617  403525 pod_ready.go:92] pod "kube-controller-manager-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.319652  403525 pod_ready.go:81] duration metric: took 10.85165ms for pod "kube-controller-manager-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.319671  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmpj2" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.593707  403525 pod_ready.go:92] pod "kube-proxy-rmpj2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.593743  403525 pod_ready.go:81] duration metric: took 274.062498ms for pod "kube-proxy-rmpj2" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.593757  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.803381  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:03.808702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.000200  403525 pod_ready.go:92] pod "kube-scheduler-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:04.000224  403525 pod_ready.go:81] duration metric: took 406.459784ms for pod "kube-scheduler-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:04.000236  403525 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:04.311093  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.311240  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:04.359937  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.608188548s)
	I0731 18:18:04.359982  403525 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.194191445s)
	I0731 18:18:04.360004  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:04.360022  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:04.360452  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:04.360486  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:04.360501  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:04.360508  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:04.360516  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:04.360785  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:04.360810  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:04.360823  403525 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-469211"
	I0731 18:18:04.362613  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:18:04.362627  403525 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 18:18:04.364156  403525 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 18:18:04.364880  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 18:18:04.365363  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 18:18:04.365381  403525 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 18:18:04.399530  403525 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 18:18:04.399551  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:04.517948  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 18:18:04.517973  403525 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 18:18:04.610990  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 18:18:04.611013  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 18:18:04.774078  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 18:18:04.806616  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:04.806702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.871477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.303318  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:05.303596  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:05.370303  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.801860  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:05.804914  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:05.871012  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.874887  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.718033509s)
	I0731 18:18:05.874950  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:05.874966  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:05.875291  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:05.875318  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:05.875331  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:05.875341  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:05.875584  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:05.875618  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.015435  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:06.301985  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:06.303429  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:06.371562  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:06.684627  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.910505894s)
	I0731 18:18:06.684696  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:06.684709  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:06.685054  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:06.685093  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:06.685120  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.685149  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:06.685161  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:06.685396  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:06.685420  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.685430  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:06.687632  403525 addons.go:475] Verifying addon gcp-auth=true in "addons-469211"
	I0731 18:18:06.689554  403525 out.go:177] * Verifying gcp-auth addon...
	I0731 18:18:06.692053  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 18:18:06.714301  403525 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 18:18:06.714324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:06.808486  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:06.809025  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:06.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:07.195316  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:07.302004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:07.302243  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:07.371865  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:07.698872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:07.801521  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:07.803788  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:07.874336  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:08.206125  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:08.307036  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:08.312053  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:08.417539  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:08.528502  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:08.700020  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:08.808781  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:08.809722  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:08.871711  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:09.195745  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:09.301693  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:09.301864  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:09.371229  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:09.697711  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:09.802572  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:09.803016  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:09.872442  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:10.195911  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:10.301757  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:10.302243  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:10.373340  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:10.696439  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:10.800065  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:10.801649  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:10.872723  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:11.011770  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:11.195907  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:11.303507  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:11.304163  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:11.370491  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:11.696700  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:11.800598  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:11.803390  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:11.871291  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:12.196621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:12.301092  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:12.303114  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:12.371306  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:12.696117  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:12.802787  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:12.802881  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:12.870987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:13.198334  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:13.302253  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:13.303927  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:13.638943  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:13.639146  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:13.697681  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:13.802184  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:13.802494  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:13.871760  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:14.196474  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:14.302051  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:14.302180  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:14.383723  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:14.695741  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:14.802232  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:14.802342  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:14.872676  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:15.195750  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:15.301295  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:15.302132  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:15.372205  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:15.696219  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:15.802541  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:15.804245  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:15.873300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:16.005713  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:16.196251  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:16.302854  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:16.304140  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:16.370725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:16.696841  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:16.802663  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:16.802716  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:16.872844  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:17.196484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:17.301931  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:17.304621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:17.370253  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:17.696024  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:17.801431  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:17.802251  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:17.870606  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:18.006534  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:18.197066  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:18.301261  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:18.303123  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:18.371432  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:18.695712  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:18.801751  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:18.802188  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:18.871914  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:19.255640  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:19.302007  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:19.303910  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:19.371321  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:19.696725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:19.801106  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:19.802564  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:19.870836  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:20.198108  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:20.302364  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:20.302477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:20.370718  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:20.508341  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:20.696554  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:20.802749  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:20.802891  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:20.870847  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:21.196342  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:21.302074  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:21.313358  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:21.370074  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:21.698356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:21.807700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:21.814238  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:21.870965  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:22.196216  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:22.302051  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:22.303717  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:22.370808  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:22.573250  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:22.966437  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:22.967259  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:22.967363  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:22.969702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:23.196542  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:23.300786  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:23.302014  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:23.370734  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:23.699406  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:23.801988  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:23.803983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:23.870804  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:24.196530  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:24.302786  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:24.303229  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:24.370300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:24.696508  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:24.801529  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:24.801599  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:24.870052  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:25.007007  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:25.198429  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:25.303069  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:25.303151  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:25.370763  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:25.695871  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:25.801506  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:25.802153  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:25.874000  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:26.196081  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:26.303440  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:26.304176  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:26.371128  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:26.696013  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:26.801007  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:26.801303  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:26.870626  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:27.007808  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:27.197946  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:27.305692  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:27.306175  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:27.371149  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:27.695657  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:27.800549  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:27.802492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:27.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:28.195525  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:28.301354  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:28.302417  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:28.370788  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:28.695962  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:28.801261  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:28.803207  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:28.872164  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:29.007844  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:29.196653  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:29.301888  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:29.302606  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:29.371346  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:29.698352  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:29.801435  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:29.802617  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:29.871059  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:30.197381  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:30.304408  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:30.304431  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:30.371102  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:30.696420  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:30.800432  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:30.801814  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:30.870897  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:31.198366  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:31.301838  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:31.303007  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:31.370833  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:31.804460  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:31.807654  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:31.812069  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:31.814627  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:31.871186  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:32.196904  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:32.302278  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:32.302700  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:32.371124  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:32.696802  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:32.802502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:32.802817  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:32.871238  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:33.196769  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:33.301896  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:33.301991  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:33.372801  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:33.696233  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:33.801286  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:33.802663  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:33.871541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:34.011018  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:34.196432  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:34.302680  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:34.304795  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:34.371857  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:34.695986  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:34.802919  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:34.803550  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:34.876451  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:35.196863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:35.302581  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:35.305155  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:35.370858  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:35.696030  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:35.802514  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:35.804332  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.319148  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:36.320150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:36.320423  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.322803  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:36.324487  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:36.370699  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:36.695998  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:36.801740  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.801804  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:36.870299  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:37.195781  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:37.300760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:37.302395  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:37.370488  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:37.696634  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:37.800603  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:37.802534  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:37.870994  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:38.196917  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:38.302292  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:38.302425  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:38.373696  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:38.506973  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:38.696810  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:38.801660  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:38.801816  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:38.871142  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:39.196165  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:39.302091  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:39.302477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:39.370438  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:39.697625  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:39.810231  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:39.810632  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:39.871311  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:40.196752  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:40.301059  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:40.302009  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:40.371313  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:40.696556  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:40.800761  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:40.801648  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:40.870715  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:41.006973  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:41.199009  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:41.301291  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:41.302282  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:41.371714  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:41.696389  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:41.803589  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:41.817440  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:41.883662  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:42.197121  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:42.301678  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:42.302690  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:42.371312  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:42.696115  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:42.801174  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:42.801326  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:42.869907  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:43.197281  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:43.301437  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:43.301591  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:43.370540  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:43.506096  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:43.696417  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:43.802335  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:43.802440  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:43.869863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.198299  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:44.301634  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:44.301776  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:44.372287  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.928295  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:44.928337  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:44.928356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.928868  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.195965  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:45.302300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.302821  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:45.371946  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:45.506735  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:45.696923  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:45.801573  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:45.802826  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.870319  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:46.196550  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:46.300204  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:46.301683  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:46.370012  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:46.696863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:46.800514  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:46.801203  403525 kapi.go:107] duration metric: took 44.004362848s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 18:18:46.871148  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:47.195891  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:47.301098  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:47.371404  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:47.698113  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:47.800940  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:47.870834  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:48.006640  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:48.196924  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:48.301707  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:48.371241  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:48.698370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:48.801525  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:48.870926  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:49.197015  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:49.300712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:49.370819  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:49.695963  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:49.801659  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:49.872409  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:50.199326  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:50.301586  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:50.371150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:50.505746  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:50.696091  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:50.801177  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:50.870702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:51.198757  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:51.300905  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:51.371594  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:51.697447  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:51.801360  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:51.870775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:52.196773  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:52.300476  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:52.371257  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:52.506616  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:52.696216  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:52.801633  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:52.869714  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:53.195966  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:53.301010  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:53.371766  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:53.695987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:53.801367  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:53.870593  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.196586  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:54.300246  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:54.370918  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.887311  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:54.887830  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:54.888408  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.888524  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.197708  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:55.301704  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.372745  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:55.698308  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:55.801384  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.871125  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:56.196319  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:56.301427  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:56.371882  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:56.697253  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:56.801071  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:56.870230  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:57.006494  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:57.198055  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:57.300777  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:57.370590  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:57.697513  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:57.800429  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:57.870214  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:58.195986  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:58.301499  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:58.371318  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:58.696829  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:58.801881  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:58.871514  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:59.007358  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:59.195792  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:59.300415  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:59.378492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:59.695553  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:59.800655  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:59.870158  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:00.196347  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:00.302038  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:00.377833  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:00.697585  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:00.800974  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:00.871215  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:01.011863  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:01.200324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:01.301490  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:01.370987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:01.698141  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:01.801329  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:01.871443  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:02.197560  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:02.300541  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:02.373272  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:02.704583  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:02.800881  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:02.874099  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:03.197505  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:03.302030  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:03.370659  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:03.506741  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:03.696527  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:03.800413  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:03.871528  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:04.196272  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:04.301375  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:04.370791  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:04.696681  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:04.801990  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:04.871419  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:05.195889  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:05.301551  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:05.371522  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:05.512956  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:05.696556  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:05.800667  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:05.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:06.197610  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:06.300393  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:06.371255  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:06.695961  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:06.801410  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:06.871072  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:07.197448  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:07.302010  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:07.370256  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:07.696291  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:07.801208  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:07.871701  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:08.006619  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:08.195718  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:08.300958  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:08.371092  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:08.695862  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:08.801540  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:08.871685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:09.196725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:09.301146  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:09.371814  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:09.799614  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:09.802397  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:09.876685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:10.006711  403525 pod_ready.go:92] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"True"
	I0731 18:19:10.006733  403525 pod_ready.go:81] duration metric: took 1m6.00649097s for pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.006744  403525 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.011719  403525 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace has status "Ready":"True"
	I0731 18:19:10.011742  403525 pod_ready.go:81] duration metric: took 4.992129ms for pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.011766  403525 pod_ready.go:38] duration metric: took 1m7.200575143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:19:10.011784  403525 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:19:10.011887  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:10.011961  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:10.064441  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:10.064474  403525 cri.go:89] found id: ""
	I0731 18:19:10.064483  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:10.064549  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.070728  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:10.070799  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:10.136832  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:10.136857  403525 cri.go:89] found id: ""
	I0731 18:19:10.136866  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:10.136927  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.144262  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:10.144332  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:10.195695  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:10.213139  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:10.213162  403525 cri.go:89] found id: ""
	I0731 18:19:10.213172  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:10.213234  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.224629  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:10.224720  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:10.279274  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:10.279301  403525 cri.go:89] found id: ""
	I0731 18:19:10.279310  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:10.279371  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.284466  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:10.284551  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:10.300946  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:10.359726  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:10.359755  403525 cri.go:89] found id: ""
	I0731 18:19:10.359764  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:10.359821  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.370265  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:10.370334  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:10.371717  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:10.437469  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:10.437502  403525 cri.go:89] found id: ""
	I0731 18:19:10.437513  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:10.437574  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.448766  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:10.448838  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:10.522724  403525 cri.go:89] found id: ""
	I0731 18:19:10.522760  403525 logs.go:276] 0 containers: []
	W0731 18:19:10.522772  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:10.522786  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:10.522802  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:10.599232  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:10.599266  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:10.688535  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:10.688575  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:10.697920  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:10.801858  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:10.873166  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.138263  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:11.138307  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:11.212524  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:11.212571  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:11.237065  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:11.237105  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:11.436983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:11.439914  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:11.443799  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.512672  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:11.512707  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:11.628979  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:11.629032  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:11.696455  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:11.794828  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:11.794862  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:11.801385  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:11.873249  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.943226  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:11.943265  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:12.024804  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:12.024844  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:12.199082  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:12.301620  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:12.370370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:12.698409  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:12.801700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:12.870543  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:13.195595  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:13.300301  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:13.372463  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:13.698085  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:13.801520  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:13.871010  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.195979  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:14.301977  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:14.373363  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.645589  403525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:19:14.675057  403525 api_server.go:72] duration metric: took 1m20.133774799s to wait for apiserver process to appear ...
	I0731 18:19:14.675093  403525 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:19:14.675141  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:14.675201  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:14.695695  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:14.756388  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:14.756416  403525 cri.go:89] found id: ""
	I0731 18:19:14.756426  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:14.756489  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.762824  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:14.762898  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:14.800889  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:14.826370  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:14.826395  403525 cri.go:89] found id: ""
	I0731 18:19:14.826403  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:14.826451  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.832743  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:14.832821  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:14.870687  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.904866  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:14.904897  403525 cri.go:89] found id: ""
	I0731 18:19:14.904907  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:14.904971  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.918138  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:14.918226  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:14.969853  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:14.969882  403525 cri.go:89] found id: ""
	I0731 18:19:14.969892  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:14.969956  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.974303  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:14.974364  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:15.029569  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:15.029600  403525 cri.go:89] found id: ""
	I0731 18:19:15.029611  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:15.029674  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:15.035633  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:15.035713  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:15.099817  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:15.099839  403525 cri.go:89] found id: ""
	I0731 18:19:15.099847  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:15.099917  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:15.104451  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:15.104523  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:15.196210  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:15.203523  403525 cri.go:89] found id: ""
	I0731 18:19:15.203548  403525 logs.go:276] 0 containers: []
	W0731 18:19:15.203555  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:15.203564  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:15.203576  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:15.255713  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:15.255744  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:15.301413  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:15.347024  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:15.347060  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:15.376208  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:15.502945  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:15.502989  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:15.594900  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:15.594938  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:15.648950  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:15.648977  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:15.699332  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:15.738140  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:15.738185  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:15.801674  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:15.823682  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:15.823725  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:15.871471  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:15.874474  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:15.874505  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:16.142628  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:16.142682  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:16.199759  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:16.300715  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:16.369501  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:16.466148  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:16.466186  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:16.696021  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:16.803728  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:16.870141  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:17.195394  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:17.301182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:17.370975  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:17.695568  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:17.801339  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:17.869932  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:18.198852  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:18.300926  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:18.370635  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:18.695496  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:18.801891  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:18.870661  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.026979  403525 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I0731 18:19:19.031422  403525 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I0731 18:19:19.032293  403525 api_server.go:141] control plane version: v1.30.3
	I0731 18:19:19.032315  403525 api_server.go:131] duration metric: took 4.357214363s to wait for apiserver health ...
	I0731 18:19:19.032323  403525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:19:19.032345  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:19.032412  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:19.103705  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:19.103733  403525 cri.go:89] found id: ""
	I0731 18:19:19.103742  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:19.103808  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.117954  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:19.118042  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:19.196252  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:19.227134  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:19.227156  403525 cri.go:89] found id: ""
	I0731 18:19:19.227164  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:19.227224  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.244526  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:19.244600  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:19.302026  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:19.373138  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.388765  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:19.388787  403525 cri.go:89] found id: ""
	I0731 18:19:19.388796  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:19.388860  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.395479  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:19.395546  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:19.478364  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:19.478387  403525 cri.go:89] found id: ""
	I0731 18:19:19.478395  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:19.478446  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.485102  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:19.485191  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:19.574690  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:19.574720  403525 cri.go:89] found id: ""
	I0731 18:19:19.574731  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:19.574790  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.581356  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:19.581424  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:19.637028  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:19.637057  403525 cri.go:89] found id: ""
	I0731 18:19:19.637067  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:19.637118  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.647252  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:19.647322  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:19.695924  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:19.742533  403525 cri.go:89] found id: ""
	I0731 18:19:19.742566  403525 logs.go:276] 0 containers: []
	W0731 18:19:19.742581  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:19.742594  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:19.742609  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:19.802263  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:19.828576  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:19.828620  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:19.874855  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.912389  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:19.912429  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:19.961358  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:19.961393  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:20.006161  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:20.006190  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:20.213339  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:20.269951  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:20.269991  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:20.301109  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:20.326282  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:20.326313  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:20.379286  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:20.613117  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:20.613155  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:20.696356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:20.732212  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:20.732272  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:20.826560  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:20.826601  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:20.897422  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:20.897463  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:21.230372  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:21.231213  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:21.234970  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.302479  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.373293  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:21.697261  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:21.802828  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.870492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:22.196615  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:22.301760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:22.371134  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:22.696346  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:22.801168  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:22.871034  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:23.199691  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:23.301451  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:23.372668  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:23.456371  403525 system_pods.go:59] 18 kube-system pods found
	I0731 18:19:23.456436  403525 system_pods.go:61] "coredns-7db6d8ff4d-kh5dt" [6887255e-d5e1-4423-9b1b-b89bd6b54f70] Running
	I0731 18:19:23.456443  403525 system_pods.go:61] "csi-hostpath-attacher-0" [03f43e9b-6d84-4f4a-b5e1-6b348f9c91d4] Running
	I0731 18:19:23.456447  403525 system_pods.go:61] "csi-hostpath-resizer-0" [bcc4df0c-9611-46c6-9717-15211248b171] Running
	I0731 18:19:23.456454  403525 system_pods.go:61] "csi-hostpathplugin-drwcw" [21a11011-6c40-4c70-bfbc-dd33b6d1fb5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 18:19:23.456458  403525 system_pods.go:61] "etcd-addons-469211" [5d9bba9b-c463-4783-ae70-a807ef84974b] Running
	I0731 18:19:23.456463  403525 system_pods.go:61] "kube-apiserver-addons-469211" [ff095112-6ec8-4380-a753-4927e2405f76] Running
	I0731 18:19:23.456467  403525 system_pods.go:61] "kube-controller-manager-addons-469211" [0f6b7f2d-cd72-452d-91a3-900b66b7dc9f] Running
	I0731 18:19:23.456470  403525 system_pods.go:61] "kube-ingress-dns-minikube" [765bcaec-3909-45a4-abcb-5d18e0090e88] Running
	I0731 18:19:23.456473  403525 system_pods.go:61] "kube-proxy-rmpj2" [6306a255-80b3-4112-bc3b-fb6a294bbd1e] Running
	I0731 18:19:23.456476  403525 system_pods.go:61] "kube-scheduler-addons-469211" [d9bb8dab-ccb7-4ee4-b61f-b21b9ae99244] Running
	I0731 18:19:23.456481  403525 system_pods.go:61] "metrics-server-c59844bb4-h86lf" [9ac7112e-a869-4a80-9630-3e06fb408aa7] Running
	I0731 18:19:23.456484  403525 system_pods.go:61] "nvidia-device-plugin-daemonset-rnrgk" [63c8e69d-6346-4ca1-869b-ff23aa567942] Running
	I0731 18:19:23.456486  403525 system_pods.go:61] "registry-698f998955-zzckf" [c1bb2989-95fe-499e-a046-21d50fcaa446] Running
	I0731 18:19:23.456489  403525 system_pods.go:61] "registry-proxy-gkcvq" [5d23ea46-e28f-4922-8b86-7e1f8ea26754] Running
	I0731 18:19:23.456492  403525 system_pods.go:61] "snapshot-controller-745499f584-8spcg" [2c8b2ba3-9621-4deb-b551-b65f868d47ec] Running
	I0731 18:19:23.456495  403525 system_pods.go:61] "snapshot-controller-745499f584-g74pq" [065f5ffc-cb71-467c-9262-a27862811292] Running
	I0731 18:19:23.456497  403525 system_pods.go:61] "storage-provisioner" [d5ca3d3e-8350-4485-9a29-3a8eff61533d] Running
	I0731 18:19:23.456501  403525 system_pods.go:61] "tiller-deploy-6677d64bcd-8hlxh" [d2d05195-43ba-4de7-91ee-2237d543c3b1] Running
	I0731 18:19:23.456508  403525 system_pods.go:74] duration metric: took 4.424178406s to wait for pod list to return data ...
	I0731 18:19:23.456518  403525 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:19:23.458634  403525 default_sa.go:45] found service account: "default"
	I0731 18:19:23.458653  403525 default_sa.go:55] duration metric: took 2.128393ms for default service account to be created ...
	I0731 18:19:23.458660  403525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:19:23.469044  403525 system_pods.go:86] 18 kube-system pods found
	I0731 18:19:23.469082  403525 system_pods.go:89] "coredns-7db6d8ff4d-kh5dt" [6887255e-d5e1-4423-9b1b-b89bd6b54f70] Running
	I0731 18:19:23.469091  403525 system_pods.go:89] "csi-hostpath-attacher-0" [03f43e9b-6d84-4f4a-b5e1-6b348f9c91d4] Running
	I0731 18:19:23.469098  403525 system_pods.go:89] "csi-hostpath-resizer-0" [bcc4df0c-9611-46c6-9717-15211248b171] Running
	I0731 18:19:23.469110  403525 system_pods.go:89] "csi-hostpathplugin-drwcw" [21a11011-6c40-4c70-bfbc-dd33b6d1fb5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 18:19:23.469119  403525 system_pods.go:89] "etcd-addons-469211" [5d9bba9b-c463-4783-ae70-a807ef84974b] Running
	I0731 18:19:23.469128  403525 system_pods.go:89] "kube-apiserver-addons-469211" [ff095112-6ec8-4380-a753-4927e2405f76] Running
	I0731 18:19:23.469134  403525 system_pods.go:89] "kube-controller-manager-addons-469211" [0f6b7f2d-cd72-452d-91a3-900b66b7dc9f] Running
	I0731 18:19:23.469141  403525 system_pods.go:89] "kube-ingress-dns-minikube" [765bcaec-3909-45a4-abcb-5d18e0090e88] Running
	I0731 18:19:23.469147  403525 system_pods.go:89] "kube-proxy-rmpj2" [6306a255-80b3-4112-bc3b-fb6a294bbd1e] Running
	I0731 18:19:23.469153  403525 system_pods.go:89] "kube-scheduler-addons-469211" [d9bb8dab-ccb7-4ee4-b61f-b21b9ae99244] Running
	I0731 18:19:23.469164  403525 system_pods.go:89] "metrics-server-c59844bb4-h86lf" [9ac7112e-a869-4a80-9630-3e06fb408aa7] Running
	I0731 18:19:23.469170  403525 system_pods.go:89] "nvidia-device-plugin-daemonset-rnrgk" [63c8e69d-6346-4ca1-869b-ff23aa567942] Running
	I0731 18:19:23.469177  403525 system_pods.go:89] "registry-698f998955-zzckf" [c1bb2989-95fe-499e-a046-21d50fcaa446] Running
	I0731 18:19:23.469186  403525 system_pods.go:89] "registry-proxy-gkcvq" [5d23ea46-e28f-4922-8b86-7e1f8ea26754] Running
	I0731 18:19:23.469193  403525 system_pods.go:89] "snapshot-controller-745499f584-8spcg" [2c8b2ba3-9621-4deb-b551-b65f868d47ec] Running
	I0731 18:19:23.469202  403525 system_pods.go:89] "snapshot-controller-745499f584-g74pq" [065f5ffc-cb71-467c-9262-a27862811292] Running
	I0731 18:19:23.469208  403525 system_pods.go:89] "storage-provisioner" [d5ca3d3e-8350-4485-9a29-3a8eff61533d] Running
	I0731 18:19:23.469214  403525 system_pods.go:89] "tiller-deploy-6677d64bcd-8hlxh" [d2d05195-43ba-4de7-91ee-2237d543c3b1] Running
	I0731 18:19:23.469226  403525 system_pods.go:126] duration metric: took 10.560847ms to wait for k8s-apps to be running ...
	I0731 18:19:23.469236  403525 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:19:23.469290  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:19:23.484343  403525 system_svc.go:56] duration metric: took 15.096025ms WaitForService to wait for kubelet
	I0731 18:19:23.484395  403525 kubeadm.go:582] duration metric: took 1m28.943115598s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:19:23.484423  403525 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:19:23.487522  403525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:19:23.487553  403525 node_conditions.go:123] node cpu capacity is 2
	I0731 18:19:23.487570  403525 node_conditions.go:105] duration metric: took 3.141253ms to run NodePressure ...
	I0731 18:19:23.487582  403525 start.go:241] waiting for startup goroutines ...
	I0731 18:19:23.695726  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:23.800927  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:23.870887  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:24.195218  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:24.302499  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:24.371178  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:24.695484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:24.802764  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:24.871487  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:25.198499  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:25.301463  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:25.372077  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:25.696241  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:25.801634  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:25.870056  403525 kapi.go:107] duration metric: took 1m21.505171282s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 18:19:26.195775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:26.300765  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:26.696089  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:26.801356  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:27.196441  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:27.301302  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:27.697983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:27.800798  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:28.196944  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:28.301348  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:28.697257  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:28.801327  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:29.196294  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:29.301665  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:29.696872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:29.801664  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:30.196528  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:30.300564  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:30.696015  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:30.801254  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:31.196872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:31.301446  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:31.695773  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:31.801188  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:32.196364  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:32.301882  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:32.696040  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:32.800989  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:33.196657  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:33.301047  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:33.695775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:33.801004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:34.196185  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:34.301182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:34.696092  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:34.800958  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:35.196161  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:35.301270  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:35.696662  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:35.802722  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:36.195864  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:36.301220  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:36.696885  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:36.800539  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:37.198025  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:37.301391  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:37.695476  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:37.802177  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:38.197454  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:38.301395  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:38.695516  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:38.802415  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:39.198400  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:39.302482  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:39.697109  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:39.801853  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:40.196984  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:40.301444  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:40.698370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:40.801988  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:41.197970  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:41.301343  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:41.698484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:41.800758  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:42.196487  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:42.302053  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:42.696150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:42.801849  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:43.196161  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:43.301309  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:43.697416  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:43.801461  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:44.197607  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:44.300954  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:44.698302  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:44.801182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:45.196071  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:45.300840  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:45.695891  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:45.800674  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:46.195755  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:46.300968  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:46.696330  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:46.801746  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:47.195687  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:47.301041  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:47.696561  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:47.803760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:48.195692  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:48.300620  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:48.696036  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:48.801592  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:49.195810  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:49.302712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:49.697201  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:49.800720  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:50.195921  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:50.301320  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:50.696685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:50.801091  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:51.197782  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:51.302176  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:51.696197  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:51.803439  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:52.195806  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:52.302046  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:52.695953  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:52.801281  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:53.196975  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:53.301622  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:53.695638  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:53.800694  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:54.195624  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:54.301900  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:54.696131  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:54.800625  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:55.195639  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:55.301135  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:55.696541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:55.800657  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:56.195678  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:56.302029  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:56.697310  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:56.801576  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:57.195667  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:57.300886  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:57.875007  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:57.875173  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:58.195796  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:58.300819  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:58.707374  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:58.801716  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:59.197626  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:59.301151  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:59.695100  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:59.804439  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:00.195951  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:00.302906  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:00.699386  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:00.803027  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:01.196939  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:01.304213  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:01.696902  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:01.801802  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:02.196111  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:02.301776  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:02.700369  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:02.802122  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:03.195375  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:03.301946  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:03.696225  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:03.801823  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:04.195551  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:04.301732  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:04.696950  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:04.801645  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:05.196018  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:05.301379  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:05.695912  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:05.801700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:06.195541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:06.302187  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:06.696502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:06.802537  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:07.197200  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:07.301933  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:07.695694  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:07.801038  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:08.196147  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:08.303949  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:08.696135  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:08.808147  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:09.196730  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:09.300914  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:09.696502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:09.801172  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:10.196645  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:10.301610  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:10.696405  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:10.803834  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:11.195543  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:11.301332  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:11.696392  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:11.802054  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:12.195994  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:12.301324  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:12.696659  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:12.803772  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:13.195603  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:13.300686  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:13.695855  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:13.803052  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:14.196546  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:14.303073  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:14.696183  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:14.801818  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:15.196058  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:15.302117  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:15.696996  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:15.801685  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:16.196648  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:16.301935  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:16.695621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:16.801151  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:17.195697  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:17.301025  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:17.696281  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:17.801233  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:18.196489  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:18.300604  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:18.695672  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:18.800934  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:19.196308  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:19.301712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:19.696333  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:19.801219  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:20.196204  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:20.301140  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:20.696588  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:20.800399  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:21.194990  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:21.303471  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:21.881438  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:21.882192  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:22.195899  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:22.301018  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:22.696031  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:22.801084  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:23.195661  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:23.301004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:23.696295  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:23.801831  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:24.518414  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:24.518706  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:24.695391  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:24.801370  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:25.195876  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:25.301875  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:25.696349  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:25.803810  403525 kapi.go:107] duration metric: took 2m23.007579325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 18:20:26.195448  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:26.696321  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:27.196013  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:27.695324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:28.196218  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:28.696103  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:29.199643  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:29.696435  403525 kapi.go:107] duration metric: took 2m23.00437999s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 18:20:29.698082  403525 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-469211 cluster.
	I0731 18:20:29.699465  403525 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 18:20:29.701014  403525 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 18:20:29.702449  403525 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 18:20:29.704000  403525 addons.go:510] duration metric: took 2m35.162698923s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 18:20:29.704052  403525 start.go:246] waiting for cluster config update ...
	I0731 18:20:29.704073  403525 start.go:255] writing updated cluster config ...
	I0731 18:20:29.704366  403525 ssh_runner.go:195] Run: rm -f paused
	I0731 18:20:29.757590  403525 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:20:29.759528  403525 out.go:177] * Done! kubectl is now configured to use "addons-469211" cluster and "default" namespace by default
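	A minimal sketch of acting on the two gcp-auth hints printed above. The `gcp-auth-skip-secret` label key and the `--refresh` suggestion come from the log output itself; the pod name, image, label value "true", and the use of the profile name as the kubectl context are illustrative assumptions only.
	
	# Hypothetical pod that opts out of credential injection by carrying the
	# gcp-auth-skip-secret label at creation time (label value assumed).
	kubectl --context addons-469211 run no-gcp-creds --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
	
	# Re-mount credentials into pods that already existed when the addon was
	# enabled, as suggested by the "--refresh" hint above.
	minikube -p addons-469211 addons enable gcp-auth --refresh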
	
	
	==> CRI-O <==
	Jul 31 18:24:07 addons-469211 crio[683]: time="2024-07-31 18:24:07.981753182Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450247981726526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53050880-019c-47b7-950a-636050654a07 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:07 addons-469211 crio[683]: time="2024-07-31 18:24:07.982285860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d62d0d3-158f-4804-95b4-955ff99cfd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:07 addons-469211 crio[683]: time="2024-07-31 18:24:07.982347141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d62d0d3-158f-4804-95b4-955ff99cfd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:07 addons-469211 crio[683]: time="2024-07-31 18:24:07.982625899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4698288a9927ce57c2c8f4bd219b31023f842b12c6b2f9fcc8ac894ae32a81b,PodSandboxId:ed12e84c02103ba2810d0fbe4b2836c51f41454021c4b726e215d3d14e2cd333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449952091032424,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-btm7m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1aab9365-0296-4343-b583-41ac0c9a3de4,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1f7f39df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bec0543a2acc2df9fb43770d908e693188fb53f54872adfdabd3c95578b2766,PodSandboxId:a8c7408bb65bcf13f84c0df678025b95d8fcd060d38eb3a0bd5332dda2a1da8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449951783835388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rlvgd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 416e6
222-209d-4023-898c-f09ad71dcb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 20cd3ffc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722449855377624034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17224498
55411737181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d62d0d3-158f-4804-95b4-955ff99cfd10 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.024062728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7205811d-1a90-4a54-a219-e807a996ddb4 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.024136854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7205811d-1a90-4a54-a219-e807a996ddb4 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.025497287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb1d0d2e-5d3f-4957-a8b7-c1595db59678 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.031577457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450248031544367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb1d0d2e-5d3f-4957-a8b7-c1595db59678 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.033195773Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58f7068e-e5cf-42b2-b817-1538ca239076 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.033285716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58f7068e-e5cf-42b2-b817-1538ca239076 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.033597955Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4698288a9927ce57c2c8f4bd219b31023f842b12c6b2f9fcc8ac894ae32a81b,PodSandboxId:ed12e84c02103ba2810d0fbe4b2836c51f41454021c4b726e215d3d14e2cd333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449952091032424,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-btm7m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1aab9365-0296-4343-b583-41ac0c9a3de4,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1f7f39df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bec0543a2acc2df9fb43770d908e693188fb53f54872adfdabd3c95578b2766,PodSandboxId:a8c7408bb65bcf13f84c0df678025b95d8fcd060d38eb3a0bd5332dda2a1da8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449951783835388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rlvgd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 416e6
222-209d-4023-898c-f09ad71dcb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 20cd3ffc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722449855377624034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17224498
55411737181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58f7068e-e5cf-42b2-b817-1538ca239076 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.075194908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b0bdd8c-26d2-4f08-9839-201e033de1a7 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.075296349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b0bdd8c-26d2-4f08-9839-201e033de1a7 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.077155328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ebff818e-9b60-4a1f-97ff-23eae6755132 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.078385738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450248078359891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebff818e-9b60-4a1f-97ff-23eae6755132 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.079247181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c94c6626-4b92-4c1b-a584-9b76c1f692ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.079321038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c94c6626-4b92-4c1b-a584-9b76c1f692ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.079618542Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4698288a9927ce57c2c8f4bd219b31023f842b12c6b2f9fcc8ac894ae32a81b,PodSandboxId:ed12e84c02103ba2810d0fbe4b2836c51f41454021c4b726e215d3d14e2cd333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449952091032424,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-btm7m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1aab9365-0296-4343-b583-41ac0c9a3de4,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1f7f39df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bec0543a2acc2df9fb43770d908e693188fb53f54872adfdabd3c95578b2766,PodSandboxId:a8c7408bb65bcf13f84c0df678025b95d8fcd060d38eb3a0bd5332dda2a1da8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449951783835388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rlvgd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 416e6
222-209d-4023-898c-f09ad71dcb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 20cd3ffc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722449855377624034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17224498
55411737181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c94c6626-4b92-4c1b-a584-9b76c1f692ad name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.117128771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d78e93bf-9088-428c-850a-b8a04af50416 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.117202089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d78e93bf-9088-428c-850a-b8a04af50416 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.118782180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17e61e94-5013-4268-a87c-8a3c198ac7ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.120210356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450248120179663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17e61e94-5013-4268-a87c-8a3c198ac7ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.120747947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=025afdbc-c6e8-4ad8-844d-b757f1c199af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.120807470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=025afdbc-c6e8-4ad8-844d-b757f1c199af name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:24:08 addons-469211 crio[683]: time="2024-07-31 18:24:08.121354347Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4698288a9927ce57c2c8f4bd219b31023f842b12c6b2f9fcc8ac894ae32a81b,PodSandboxId:ed12e84c02103ba2810d0fbe4b2836c51f41454021c4b726e215d3d14e2cd333,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449952091032424,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-btm7m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1aab9365-0296-4343-b583-41ac0c9a3de4,},Anno
tations:map[string]string{io.kubernetes.container.hash: 1f7f39df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bec0543a2acc2df9fb43770d908e693188fb53f54872adfdabd3c95578b2766,PodSandboxId:a8c7408bb65bcf13f84c0df678025b95d8fcd060d38eb3a0bd5332dda2a1da8a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722449951783835388,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-rlvgd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 416e6
222-209d-4023-898c-f09ad71dcb4d,},Annotations:map[string]string{io.kubernetes.container.hash: 20cd3ffc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: stora
ge-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.nam
e: coredns-7db6d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd
422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a8
99,State:CONTAINER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:
1722449855377624034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17224498
55411737181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=025afdbc-c6e8-4ad8-844d-b757f1c199af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	600e268c9b4c7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f45c4c6caa683       hello-world-app-6778b5fc9f-tltx6
	95c4be4243996       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   60aba2a2c1db3       nginx
	6c0e8d72cb166       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e1855c794cdd1       busybox
	d4698288a9927       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             4 minutes ago       Exited              patch                     1                   ed12e84c02103       ingress-nginx-admission-patch-btm7m
	0bec0543a2acc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   4 minutes ago       Exited              create                    0                   a8c7408bb65bc       ingress-nginx-admission-create-rlvgd
	9b2fedbba32da       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        6 minutes ago       Running             metrics-server            0                   0347af01ac0ac       metrics-server-c59844bb4-h86lf
	ee60a7abb89e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago       Running             storage-provisioner       0                   6827f9cf8e470       storage-provisioner
	7536452f0fcb4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             6 minutes ago       Running             coredns                   0                   1d35d21e498d6       coredns-7db6d8ff4d-kh5dt
	ba90a67fa1aa5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             6 minutes ago       Running             kube-proxy                0                   888165c5f82a2       kube-proxy-rmpj2
	eb02210ee6a3d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   ad37fbb5b0218       etcd-addons-469211
	d85e7e6d15dcb       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             6 minutes ago       Running             kube-scheduler            0                   60a993a58627d       kube-scheduler-addons-469211
	7b5254e9d9289       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             6 minutes ago       Running             kube-controller-manager   0                   f04044c1f36fa       kube-controller-manager-addons-469211
	13115e6c0aea5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             6 minutes ago       Running             kube-apiserver            0                   a0dfcf2af5767       kube-apiserver-addons-469211
	
	
	==> coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] <==
	[INFO] 10.244.0.8:51881 - 54780 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152344s
	[INFO] 10.244.0.8:56392 - 37732 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150094s
	[INFO] 10.244.0.8:56392 - 17511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000177352s
	[INFO] 10.244.0.8:50233 - 17950 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150563s
	[INFO] 10.244.0.8:50233 - 23840 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000639972s
	[INFO] 10.244.0.8:51520 - 50804 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089098s
	[INFO] 10.244.0.8:51520 - 43382 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000059882s
	[INFO] 10.244.0.8:50450 - 25131 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007004s
	[INFO] 10.244.0.8:50450 - 19750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00002636s
	[INFO] 10.244.0.8:59271 - 38940 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084895s
	[INFO] 10.244.0.8:59271 - 25630 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012547s
	[INFO] 10.244.0.8:55317 - 33831 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059959s
	[INFO] 10.244.0.8:55317 - 33829 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041544s
	[INFO] 10.244.0.8:35059 - 49500 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053004s
	[INFO] 10.244.0.8:35059 - 30303 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102092s
	[INFO] 10.244.0.22:40097 - 41793 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000590028s
	[INFO] 10.244.0.22:60712 - 7890 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151206s
	[INFO] 10.244.0.22:58114 - 47304 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124492s
	[INFO] 10.244.0.22:54165 - 29774 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064445s
	[INFO] 10.244.0.22:54634 - 54026 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008677s
	[INFO] 10.244.0.22:34467 - 55534 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000055567s
	[INFO] 10.244.0.22:58813 - 27925 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000642159s
	[INFO] 10.244.0.22:36111 - 33575 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000980802s
	[INFO] 10.244.0.27:55602 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000594832s
	[INFO] 10.244.0.27:35466 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163617s
	
	
	==> describe nodes <==
	Name:               addons-469211
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-469211
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=addons-469211
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_17_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-469211
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:17:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-469211
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:24:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:21:45 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:21:45 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:21:45 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:21:45 +0000   Wed, 31 Jul 2024 18:17:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    addons-469211
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb416ebe65de43479a11c073f8c2776c
	  System UUID:                fb416ebe-65de-4347-9a11-c073f8c2776c
	  Boot ID:                    83f919af-68de-44e4-bc69-505ed3b07279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  default                     hello-world-app-6778b5fc9f-tltx6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 coredns-7db6d8ff4d-kh5dt                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m14s
	  kube-system                 etcd-addons-469211                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m28s
	  kube-system                 kube-apiserver-addons-469211             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-addons-469211    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-rmpj2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-scheduler-addons-469211             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 metrics-server-c59844bb4-h86lf           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         6m10s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m11s                  kube-proxy       
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node addons-469211 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node addons-469211 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node addons-469211 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s                  kubelet          Node addons-469211 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s                  kubelet          Node addons-469211 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s                  kubelet          Node addons-469211 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m27s                  kubelet          Node addons-469211 status is now: NodeReady
	  Normal  RegisteredNode           6m15s                  node-controller  Node addons-469211 event: Registered Node addons-469211 in Controller
	
	
	==> dmesg <==
	[Jul31 18:18] kauditd_printk_skb: 83 callbacks suppressed
	[ +10.790338] kauditd_printk_skb: 138 callbacks suppressed
	[ +22.950617] kauditd_printk_skb: 4 callbacks suppressed
	[Jul31 18:19] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.550451] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.602192] kauditd_printk_skb: 76 callbacks suppressed
	[ +16.458374] kauditd_printk_skb: 14 callbacks suppressed
	[ +22.058454] kauditd_printk_skb: 24 callbacks suppressed
	[Jul31 18:20] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.971245] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.394336] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.254955] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.198225] kauditd_printk_skb: 34 callbacks suppressed
	[ +10.275643] kauditd_printk_skb: 24 callbacks suppressed
	[Jul31 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.299228] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.934665] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.172066] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.609330] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.773715] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.235417] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.180409] kauditd_printk_skb: 15 callbacks suppressed
	[Jul31 18:22] kauditd_printk_skb: 33 callbacks suppressed
	[Jul31 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[Jul31 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] <==
	{"level":"warn","ts":"2024-07-31T18:20:21.868733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.938551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T18:20:21.868775Z","caller":"traceutil/trace.go:171","msg":"trace[1764405769] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1293; }","duration":"185.02019ms","start":"2024-07-31T18:20:21.683748Z","end":"2024-07-31T18:20:21.868768Z","steps":["trace[1764405769] 'agreement among raft nodes before linearized reading'  (duration: 184.900232ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:20:24.505149Z","caller":"traceutil/trace.go:171","msg":"trace[913952359] linearizableReadLoop","detail":"{readStateIndex:1351; appliedIndex:1350; }","duration":"321.27837ms","start":"2024-07-31T18:20:24.183856Z","end":"2024-07-31T18:20:24.505134Z","steps":["trace[913952359] 'read index received'  (duration: 320.974021ms)","trace[913952359] 'applied index is now lower than readState.Index'  (duration: 303.869µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T18:20:24.505305Z","caller":"traceutil/trace.go:171","msg":"trace[765517706] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"476.891773ms","start":"2024-07-31T18:20:24.028406Z","end":"2024-07-31T18:20:24.505298Z","steps":["trace[765517706] 'process raft request'  (duration: 476.462981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.186227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-31T18:20:24.505493Z","caller":"traceutil/trace.go:171","msg":"trace[472395463] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1296; }","duration":"217.276574ms","start":"2024-07-31T18:20:24.288208Z","end":"2024-07-31T18:20:24.505485Z","steps":["trace[472395463] 'agreement among raft nodes before linearized reading'  (duration: 217.135101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.749165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T18:20:24.505625Z","caller":"traceutil/trace.go:171","msg":"trace[1691468399] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1296; }","duration":"321.811979ms","start":"2024-07-31T18:20:24.183807Z","end":"2024-07-31T18:20:24.505619Z","steps":["trace[1691468399] 'agreement among raft nodes before linearized reading'  (duration: 321.722907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:20:24.183794Z","time spent":"321.867252ms","remote":"127.0.0.1:41006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4391,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-31T18:20:24.505454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:20:24.028386Z","time spent":"476.96416ms","remote":"127.0.0.1:41000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1294 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T18:21:00.509458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.796315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:21:00.509533Z","caller":"traceutil/trace.go:171","msg":"trace[1113990574] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1514; }","duration":"341.981307ms","start":"2024-07-31T18:21:00.167532Z","end":"2024-07-31T18:21:00.509513Z","steps":["trace[1113990574] 'range keys from in-memory index tree'  (duration: 341.745763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:00.509566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:00.167519Z","time spent":"342.037984ms","remote":"127.0.0.1:40848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-31T18:21:33.139639Z","caller":"traceutil/trace.go:171","msg":"trace[1924874950] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"244.266945ms","start":"2024-07-31T18:21:32.895354Z","end":"2024-07-31T18:21:33.139621Z","steps":["trace[1924874950] 'process raft request'  (duration: 244.17525ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:33.139799Z","caller":"traceutil/trace.go:171","msg":"trace[2056968767] linearizableReadLoop","detail":"{readStateIndex:1883; appliedIndex:1883; }","duration":"215.626454ms","start":"2024-07-31T18:21:32.924158Z","end":"2024-07-31T18:21:33.139785Z","steps":["trace[2056968767] 'read index received'  (duration: 215.620709ms)","trace[2056968767] 'applied index is now lower than readState.Index'  (duration: 4.915µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:21:33.140102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.927984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8057"}
	{"level":"info","ts":"2024-07-31T18:21:33.140126Z","caller":"traceutil/trace.go:171","msg":"trace[1479707747] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1805; }","duration":"215.992251ms","start":"2024-07-31T18:21:32.924127Z","end":"2024-07-31T18:21:33.140119Z","steps":["trace[1479707747] 'agreement among raft nodes before linearized reading'  (duration: 215.73337ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:33.142508Z","caller":"traceutil/trace.go:171","msg":"trace[1879085037] transaction","detail":"{read_only:false; response_revision:1806; number_of_response:1; }","duration":"217.240015ms","start":"2024-07-31T18:21:32.925259Z","end":"2024-07-31T18:21:33.142499Z","steps":["trace[1879085037] 'process raft request'  (duration: 217.1653ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:35.586145Z","caller":"traceutil/trace.go:171","msg":"trace[2122175916] linearizableReadLoop","detail":"{readStateIndex:1887; appliedIndex:1886; }","duration":"439.964432ms","start":"2024-07-31T18:21:35.146157Z","end":"2024-07-31T18:21:35.586122Z","steps":["trace[2122175916] 'read index received'  (duration: 365.253182ms)","trace[2122175916] 'applied index is now lower than readState.Index'  (duration: 74.70998ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:21:35.586375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"440.18852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-31T18:21:35.586415Z","caller":"traceutil/trace.go:171","msg":"trace[1964439360] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1808; }","duration":"440.303507ms","start":"2024-07-31T18:21:35.146103Z","end":"2024-07-31T18:21:35.586407Z","steps":["trace[1964439360] 'agreement among raft nodes before linearized reading'  (duration: 440.13877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:35.586443Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:35.146091Z","time spent":"440.346397ms","remote":"127.0.0.1:41000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-31T18:21:35.586639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.320664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:21:35.586675Z","caller":"traceutil/trace.go:171","msg":"trace[1298070854] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1808; }","duration":"419.378907ms","start":"2024-07-31T18:21:35.167289Z","end":"2024-07-31T18:21:35.586668Z","steps":["trace[1298070854] 'agreement among raft nodes before linearized reading'  (duration: 419.327327ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:35.586698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:35.167255Z","time spent":"419.438466ms","remote":"127.0.0.1:40848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 18:24:08 up 7 min,  0 users,  load average: 0.26, 0.85, 0.54
	Linux addons-469211 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] <==
	E0731 18:19:09.548080       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.223.16:443: connect: connection refused
	E0731 18:19:09.552420       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.223.16:443: connect: connection refused
	I0731 18:19:09.669519       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 18:20:42.278725       1 conn.go:339] Error on socket receive: read tcp 192.168.39.187:8443->192.168.39.1:36566: use of closed network connection
	I0731 18:20:51.793556       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.34.248"}
	E0731 18:21:22.512739       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0731 18:21:24.930226       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 18:21:26.038342       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 18:21:30.465089       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 18:21:30.677078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.27.6"}
	I0731 18:21:42.990060       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 18:22:02.468449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.468787       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.499650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.500137       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.526441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.526510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.538039       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.538091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.572428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.572716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 18:22:03.527044       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 18:22:03.565762       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 18:22:03.588494       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 18:23:57.662754       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.152.232"}
	
	
	==> kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] <==
	W0731 18:22:46.373143       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:22:46.373257       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:11.587226       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:11.587266       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:13.136093       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:13.136315       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:18.911837       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:18.912055       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:36.375360       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:36.375448       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:50.542406       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:50.542469       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:23:51.896887       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:23:51.896999       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 18:23:57.503373       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.698215ms"
	I0731 18:23:57.511563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="8.059178ms"
	I0731 18:23:57.513008       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="154.062µs"
	I0731 18:23:57.519492       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.016µs"
	I0731 18:24:00.105256       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0731 18:24:00.125032       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0731 18:24:00.127005       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="8.164µs"
	W0731 18:24:00.393807       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:24:00.394023       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 18:24:01.499353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.640196ms"
	I0731 18:24:01.500533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="47.877µs"
	
	
	==> kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] <==
	I0731 18:17:56.320752       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:17:56.344864       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0731 18:17:56.436405       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:17:56.436477       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:17:56.436495       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:17:56.440356       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:17:56.440551       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:17:56.440580       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:17:56.442400       1 config.go:192] "Starting service config controller"
	I0731 18:17:56.442435       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:17:56.442463       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:17:56.442467       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:17:56.442923       1 config.go:319] "Starting node config controller"
	I0731 18:17:56.443000       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:17:56.543043       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:17:56.543085       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:17:56.543109       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] <==
	W0731 18:17:38.898226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:17:38.898341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:17:38.912789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:17:38.912888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:17:38.983641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:17:38.983815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:17:38.998251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:38.998305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.051449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:17:39.051618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:17:39.093884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:39.094040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.104824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:17:39.104926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:17:39.150842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:39.150927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.160439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:17:39.160487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 18:17:39.171110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:17:39.171155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 18:17:39.182728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:17:39.182882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:17:39.245643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:17:39.246484       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 18:17:42.444836       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:23:57 addons-469211 kubelet[1274]: I0731 18:23:57.494216    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a11011-6c40-4c70-bfbc-dd33b6d1fb5d" containerName="csi-snapshotter"
	Jul 31 18:23:57 addons-469211 kubelet[1274]: I0731 18:23:57.494247    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="03f43e9b-6d84-4f4a-b5e1-6b348f9c91d4" containerName="csi-attacher"
	Jul 31 18:23:57 addons-469211 kubelet[1274]: I0731 18:23:57.494277    1274 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a11011-6c40-4c70-bfbc-dd33b6d1fb5d" containerName="liveness-probe"
	Jul 31 18:23:57 addons-469211 kubelet[1274]: I0731 18:23:57.592131    1274 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77wfp\" (UniqueName: \"kubernetes.io/projected/5d8865e5-0846-480c-9d65-d004373b16c8-kube-api-access-77wfp\") pod \"hello-world-app-6778b5fc9f-tltx6\" (UID: \"5d8865e5-0846-480c-9d65-d004373b16c8\") " pod="default/hello-world-app-6778b5fc9f-tltx6"
	Jul 31 18:23:58 addons-469211 kubelet[1274]: I0731 18:23:58.599339    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6xzh\" (UniqueName: \"kubernetes.io/projected/765bcaec-3909-45a4-abcb-5d18e0090e88-kube-api-access-h6xzh\") pod \"765bcaec-3909-45a4-abcb-5d18e0090e88\" (UID: \"765bcaec-3909-45a4-abcb-5d18e0090e88\") "
	Jul 31 18:23:58 addons-469211 kubelet[1274]: I0731 18:23:58.601576    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765bcaec-3909-45a4-abcb-5d18e0090e88-kube-api-access-h6xzh" (OuterVolumeSpecName: "kube-api-access-h6xzh") pod "765bcaec-3909-45a4-abcb-5d18e0090e88" (UID: "765bcaec-3909-45a4-abcb-5d18e0090e88"). InnerVolumeSpecName "kube-api-access-h6xzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 18:23:58 addons-469211 kubelet[1274]: I0731 18:23:58.700733    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-h6xzh\" (UniqueName: \"kubernetes.io/projected/765bcaec-3909-45a4-abcb-5d18e0090e88-kube-api-access-h6xzh\") on node \"addons-469211\" DevicePath \"\""
	Jul 31 18:23:59 addons-469211 kubelet[1274]: I0731 18:23:59.460301    1274 scope.go:117] "RemoveContainer" containerID="faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4"
	Jul 31 18:23:59 addons-469211 kubelet[1274]: I0731 18:23:59.499468    1274 scope.go:117] "RemoveContainer" containerID="faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4"
	Jul 31 18:23:59 addons-469211 kubelet[1274]: E0731 18:23:59.499928    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4\": container with ID starting with faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4 not found: ID does not exist" containerID="faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4"
	Jul 31 18:23:59 addons-469211 kubelet[1274]: I0731 18:23:59.500024    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4"} err="failed to get container status \"faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4\": rpc error: code = NotFound desc = could not find container \"faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4\": container with ID starting with faa9d5bc9dff7342c3a0cfea89d697cb668856e9112508c14020466c3baedbf4 not found: ID does not exist"
	Jul 31 18:24:00 addons-469211 kubelet[1274]: I0731 18:24:00.626382    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aab9365-0296-4343-b583-41ac0c9a3de4" path="/var/lib/kubelet/pods/1aab9365-0296-4343-b583-41ac0c9a3de4/volumes"
	Jul 31 18:24:00 addons-469211 kubelet[1274]: I0731 18:24:00.627141    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416e6222-209d-4023-898c-f09ad71dcb4d" path="/var/lib/kubelet/pods/416e6222-209d-4023-898c-f09ad71dcb4d/volumes"
	Jul 31 18:24:00 addons-469211 kubelet[1274]: I0731 18:24:00.627695    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765bcaec-3909-45a4-abcb-5d18e0090e88" path="/var/lib/kubelet/pods/765bcaec-3909-45a4-abcb-5d18e0090e88/volumes"
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.436113    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkbbr\" (UniqueName: \"kubernetes.io/projected/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-kube-api-access-vkbbr\") pod \"091f8460-1f6c-4bb6-9d1b-e4e72feeb6db\" (UID: \"091f8460-1f6c-4bb6-9d1b-e4e72feeb6db\") "
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.436173    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-webhook-cert\") pod \"091f8460-1f6c-4bb6-9d1b-e4e72feeb6db\" (UID: \"091f8460-1f6c-4bb6-9d1b-e4e72feeb6db\") "
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.438570    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "091f8460-1f6c-4bb6-9d1b-e4e72feeb6db" (UID: "091f8460-1f6c-4bb6-9d1b-e4e72feeb6db"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.440173    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-kube-api-access-vkbbr" (OuterVolumeSpecName: "kube-api-access-vkbbr") pod "091f8460-1f6c-4bb6-9d1b-e4e72feeb6db" (UID: "091f8460-1f6c-4bb6-9d1b-e4e72feeb6db"). InnerVolumeSpecName "kube-api-access-vkbbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.489914    1274 scope.go:117] "RemoveContainer" containerID="26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681"
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.509860    1274 scope.go:117] "RemoveContainer" containerID="26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681"
	Jul 31 18:24:03 addons-469211 kubelet[1274]: E0731 18:24:03.510388    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681\": container with ID starting with 26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681 not found: ID does not exist" containerID="26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681"
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.510431    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681"} err="failed to get container status \"26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681\": rpc error: code = NotFound desc = could not find container \"26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681\": container with ID starting with 26c6a07dadf37d80788941dcd153f4467c5ba8311ec1aadff80ad25404d4c681 not found: ID does not exist"
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.537428    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vkbbr\" (UniqueName: \"kubernetes.io/projected/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-kube-api-access-vkbbr\") on node \"addons-469211\" DevicePath \"\""
	Jul 31 18:24:03 addons-469211 kubelet[1274]: I0731 18:24:03.537488    1274 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db-webhook-cert\") on node \"addons-469211\" DevicePath \"\""
	Jul 31 18:24:04 addons-469211 kubelet[1274]: I0731 18:24:04.627567    1274 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="091f8460-1f6c-4bb6-9d1b-e4e72feeb6db" path="/var/lib/kubelet/pods/091f8460-1f6c-4bb6-9d1b-e4e72feeb6db/volumes"
	
	
	==> storage-provisioner [ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352] <==
	I0731 18:18:05.211872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:18:05.223080       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:18:05.223129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:18:05.242789       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:18:05.243158       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb!
	I0731 18:18:05.243293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb3c91f-ce3b-43a5-b167-157fd9383e5c", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb became leader
	I0731 18:18:05.343837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-469211 -n addons-469211
helpers_test.go:261: (dbg) Run:  kubectl --context addons-469211 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (159.04s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (349.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.555917ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-h86lf" [9ac7112e-a869-4a80-9630-3e06fb408aa7] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005189998s
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (75.747829ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 3m32.098665663s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (79.394046ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 3m34.498867736s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (64.354638ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 3m38.501450954s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (70.016309ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 3m42.981897515s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (67.085927ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 3m53.634923831s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (67.566707ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 4m3.550680492s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (65.782725ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 4m16.991820569s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (71.338487ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 4m51.59861776s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (71.227822ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 6m6.42249929s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (62.568148ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 7m20.543914488s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (66.131685ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 8m8.740920342s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (66.899901ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 8m43.879089453s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-469211 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-469211 top pods -n kube-system: exit status 1 (64.890068ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-kh5dt, age: 9m14.131552784s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-469211 -n addons-469211
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 logs -n 25: (1.383392883s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-127403                                                                     | download-only-127403 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-995532 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | binary-mirror-995532                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39497                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-995532                                                                     | binary-mirror-995532 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| addons  | enable dashboard -p                                                                         | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-469211 --wait=true                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | -p addons-469211                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:20 UTC | 31 Jul 24 18:20 UTC |
	|         | -p addons-469211                                                                            |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-469211 ssh cat                                                                       | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | /opt/local-path-provisioner/pvc-81750708-88a8-4465-b0b3-553afcc3b33e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-469211 ip                                                                            | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:21 UTC |
	|         | addons-469211                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-469211 ssh curl -s                                                                   | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-469211 addons                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:21 UTC | 31 Jul 24 18:22 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-469211 addons                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:22 UTC | 31 Jul 24 18:22 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-469211 ip                                                                            | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:23 UTC |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:23 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-469211 addons disable                                                                | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:23 UTC | 31 Jul 24 18:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-469211 addons                                                                        | addons-469211        | jenkins | v1.33.1 | 31 Jul 24 18:27 UTC | 31 Jul 24 18:27 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:16:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:16:57.860730  403525 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:16:57.860851  403525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:57.860861  403525 out.go:304] Setting ErrFile to fd 2...
	I0731 18:16:57.860865  403525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:57.861074  403525 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:16:57.861736  403525 out.go:298] Setting JSON to false
	I0731 18:16:57.862670  403525 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7161,"bootTime":1722442657,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:16:57.862737  403525 start.go:139] virtualization: kvm guest
	I0731 18:16:57.865057  403525 out.go:177] * [addons-469211] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:16:57.866513  403525 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:16:57.866561  403525 notify.go:220] Checking for updates...
	I0731 18:16:57.869388  403525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:16:57.870865  403525 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:16:57.872231  403525 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:57.873639  403525 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:16:57.875087  403525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:16:57.876630  403525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:16:57.908294  403525 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:16:57.909524  403525 start.go:297] selected driver: kvm2
	I0731 18:16:57.909537  403525 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:16:57.909549  403525 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:16:57.910288  403525 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:57.910369  403525 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:16:57.926051  403525 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:16:57.926113  403525 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:16:57.926352  403525 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:16:57.926382  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:16:57.926391  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:16:57.926405  403525 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:16:57.926482  403525 start.go:340] cluster config:
	{Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:16:57.926578  403525 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:57.929283  403525 out.go:177] * Starting "addons-469211" primary control-plane node in "addons-469211" cluster
	I0731 18:16:57.930889  403525 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:16:57.930935  403525 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:16:57.930943  403525 cache.go:56] Caching tarball of preloaded images
	I0731 18:16:57.931033  403525 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:16:57.931043  403525 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:16:57.931398  403525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json ...
	I0731 18:16:57.931423  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json: {Name:mkde003688e571a7e4f73417fb328fe2240f62d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:16:57.931561  403525 start.go:360] acquireMachinesLock for addons-469211: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:16:57.931603  403525 start.go:364] duration metric: took 29.403µs to acquireMachinesLock for "addons-469211"
	I0731 18:16:57.931621  403525 start.go:93] Provisioning new machine with config: &{Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:16:57.931690  403525 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:16:57.933591  403525 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 18:16:57.933739  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:16:57.933788  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:16:57.948702  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37785
	I0731 18:16:57.949234  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:16:57.949863  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:16:57.949886  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:16:57.950307  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:16:57.950492  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:16:57.950636  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:16:57.950794  403525 start.go:159] libmachine.API.Create for "addons-469211" (driver="kvm2")
	I0731 18:16:57.950824  403525 client.go:168] LocalClient.Create starting
	I0731 18:16:57.950889  403525 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:16:58.183978  403525 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
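The two lines above are libmachine generating the CA and client certificate under .minikube/certs. A minimal self-signed CA sketch with Go's crypto/x509; the CommonName, key size, and lifetime are assumptions, not libmachine's exact values:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Generate the CA key pair.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Self-signed CA template (illustrative subject and validity).
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		// Write the certificate as PEM, analogous to certs/ca.pem in the log.
		out, err := os.Create("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
		log.Println("wrote ca.pem")
	}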
	I0731 18:16:58.530021  403525 main.go:141] libmachine: Running pre-create checks...
	I0731 18:16:58.530048  403525 main.go:141] libmachine: (addons-469211) Calling .PreCreateCheck
	I0731 18:16:58.530577  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:16:58.531091  403525 main.go:141] libmachine: Creating machine...
	I0731 18:16:58.531107  403525 main.go:141] libmachine: (addons-469211) Calling .Create
	I0731 18:16:58.531237  403525 main.go:141] libmachine: (addons-469211) Creating KVM machine...
	I0731 18:16:58.532511  403525 main.go:141] libmachine: (addons-469211) DBG | found existing default KVM network
	I0731 18:16:58.533337  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.533166  403547 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011d1f0}
	I0731 18:16:58.533354  403525 main.go:141] libmachine: (addons-469211) DBG | created network xml: 
	I0731 18:16:58.533364  403525 main.go:141] libmachine: (addons-469211) DBG | <network>
	I0731 18:16:58.533369  403525 main.go:141] libmachine: (addons-469211) DBG |   <name>mk-addons-469211</name>
	I0731 18:16:58.533382  403525 main.go:141] libmachine: (addons-469211) DBG |   <dns enable='no'/>
	I0731 18:16:58.533389  403525 main.go:141] libmachine: (addons-469211) DBG |   
	I0731 18:16:58.533399  403525 main.go:141] libmachine: (addons-469211) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:16:58.533409  403525 main.go:141] libmachine: (addons-469211) DBG |     <dhcp>
	I0731 18:16:58.533420  403525 main.go:141] libmachine: (addons-469211) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:16:58.533431  403525 main.go:141] libmachine: (addons-469211) DBG |     </dhcp>
	I0731 18:16:58.533440  403525 main.go:141] libmachine: (addons-469211) DBG |   </ip>
	I0731 18:16:58.533454  403525 main.go:141] libmachine: (addons-469211) DBG |   
	I0731 18:16:58.533460  403525 main.go:141] libmachine: (addons-469211) DBG | </network>
	I0731 18:16:58.533464  403525 main.go:141] libmachine: (addons-469211) DBG | 
	I0731 18:16:58.539069  403525 main.go:141] libmachine: (addons-469211) DBG | trying to create private KVM network mk-addons-469211 192.168.39.0/24...
	I0731 18:16:58.603615  403525 main.go:141] libmachine: (addons-469211) DBG | private KVM network mk-addons-469211 192.168.39.0/24 created
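The DBG lines above show the network XML the kvm2 driver generates and the private KVM network it then creates. A sketch of that define-and-start step using the libvirt.org/go/libvirt bindings; the connection URI, inlined XML, and error handling are illustrative rather than the driver's exact code:

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	// The XML mirrors what the log prints for mk-addons-469211.
	const networkXML = `<network>
	  <name>mk-addons-469211</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		// Connect to the system libvirt daemon, matching the qemu:///system URI in the log.
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Define the persistent network from the generated XML, then start it.
		network, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer network.Free()
		if err := network.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
		log.Println("private KVM network created")
	}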
	I0731 18:16:58.603648  403525 main.go:141] libmachine: (addons-469211) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 ...
	I0731 18:16:58.603690  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.603623  403547 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:58.603711  403525 main.go:141] libmachine: (addons-469211) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:16:58.603769  403525 main.go:141] libmachine: (addons-469211) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:16:58.882586  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:58.882358  403547 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa...
	I0731 18:16:59.075233  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:59.075092  403547 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/addons-469211.rawdisk...
	I0731 18:16:59.075264  403525 main.go:141] libmachine: (addons-469211) DBG | Writing magic tar header
	I0731 18:16:59.075281  403525 main.go:141] libmachine: (addons-469211) DBG | Writing SSH key tar header
	I0731 18:16:59.075298  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:16:59.075211  403547 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 ...
	I0731 18:16:59.075313  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211
	I0731 18:16:59.075352  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:16:59.075370  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211 (perms=drwx------)
	I0731 18:16:59.075382  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:59.075439  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:16:59.075471  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:16:59.075482  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:16:59.075497  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:16:59.075521  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:16:59.075535  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:16:59.075545  403525 main.go:141] libmachine: (addons-469211) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:16:59.075554  403525 main.go:141] libmachine: (addons-469211) Creating domain...
	I0731 18:16:59.075564  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:16:59.075576  403525 main.go:141] libmachine: (addons-469211) DBG | Checking permissions on dir: /home
	I0731 18:16:59.075588  403525 main.go:141] libmachine: (addons-469211) DBG | Skipping /home - not owner
	I0731 18:16:59.076534  403525 main.go:141] libmachine: (addons-469211) define libvirt domain using xml: 
	I0731 18:16:59.076563  403525 main.go:141] libmachine: (addons-469211) <domain type='kvm'>
	I0731 18:16:59.076569  403525 main.go:141] libmachine: (addons-469211)   <name>addons-469211</name>
	I0731 18:16:59.076578  403525 main.go:141] libmachine: (addons-469211)   <memory unit='MiB'>4000</memory>
	I0731 18:16:59.076585  403525 main.go:141] libmachine: (addons-469211)   <vcpu>2</vcpu>
	I0731 18:16:59.076592  403525 main.go:141] libmachine: (addons-469211)   <features>
	I0731 18:16:59.076597  403525 main.go:141] libmachine: (addons-469211)     <acpi/>
	I0731 18:16:59.076602  403525 main.go:141] libmachine: (addons-469211)     <apic/>
	I0731 18:16:59.076607  403525 main.go:141] libmachine: (addons-469211)     <pae/>
	I0731 18:16:59.076613  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.076618  403525 main.go:141] libmachine: (addons-469211)   </features>
	I0731 18:16:59.076623  403525 main.go:141] libmachine: (addons-469211)   <cpu mode='host-passthrough'>
	I0731 18:16:59.076637  403525 main.go:141] libmachine: (addons-469211)   
	I0731 18:16:59.076647  403525 main.go:141] libmachine: (addons-469211)   </cpu>
	I0731 18:16:59.076657  403525 main.go:141] libmachine: (addons-469211)   <os>
	I0731 18:16:59.076669  403525 main.go:141] libmachine: (addons-469211)     <type>hvm</type>
	I0731 18:16:59.076683  403525 main.go:141] libmachine: (addons-469211)     <boot dev='cdrom'/>
	I0731 18:16:59.076696  403525 main.go:141] libmachine: (addons-469211)     <boot dev='hd'/>
	I0731 18:16:59.076704  403525 main.go:141] libmachine: (addons-469211)     <bootmenu enable='no'/>
	I0731 18:16:59.076714  403525 main.go:141] libmachine: (addons-469211)   </os>
	I0731 18:16:59.076720  403525 main.go:141] libmachine: (addons-469211)   <devices>
	I0731 18:16:59.076727  403525 main.go:141] libmachine: (addons-469211)     <disk type='file' device='cdrom'>
	I0731 18:16:59.076737  403525 main.go:141] libmachine: (addons-469211)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/boot2docker.iso'/>
	I0731 18:16:59.076744  403525 main.go:141] libmachine: (addons-469211)       <target dev='hdc' bus='scsi'/>
	I0731 18:16:59.076753  403525 main.go:141] libmachine: (addons-469211)       <readonly/>
	I0731 18:16:59.076765  403525 main.go:141] libmachine: (addons-469211)     </disk>
	I0731 18:16:59.076777  403525 main.go:141] libmachine: (addons-469211)     <disk type='file' device='disk'>
	I0731 18:16:59.076811  403525 main.go:141] libmachine: (addons-469211)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:16:59.076824  403525 main.go:141] libmachine: (addons-469211)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/addons-469211.rawdisk'/>
	I0731 18:16:59.076830  403525 main.go:141] libmachine: (addons-469211)       <target dev='hda' bus='virtio'/>
	I0731 18:16:59.076837  403525 main.go:141] libmachine: (addons-469211)     </disk>
	I0731 18:16:59.076849  403525 main.go:141] libmachine: (addons-469211)     <interface type='network'>
	I0731 18:16:59.076864  403525 main.go:141] libmachine: (addons-469211)       <source network='mk-addons-469211'/>
	I0731 18:16:59.076875  403525 main.go:141] libmachine: (addons-469211)       <model type='virtio'/>
	I0731 18:16:59.076885  403525 main.go:141] libmachine: (addons-469211)     </interface>
	I0731 18:16:59.076908  403525 main.go:141] libmachine: (addons-469211)     <interface type='network'>
	I0731 18:16:59.076920  403525 main.go:141] libmachine: (addons-469211)       <source network='default'/>
	I0731 18:16:59.076933  403525 main.go:141] libmachine: (addons-469211)       <model type='virtio'/>
	I0731 18:16:59.076944  403525 main.go:141] libmachine: (addons-469211)     </interface>
	I0731 18:16:59.076955  403525 main.go:141] libmachine: (addons-469211)     <serial type='pty'>
	I0731 18:16:59.076966  403525 main.go:141] libmachine: (addons-469211)       <target port='0'/>
	I0731 18:16:59.076974  403525 main.go:141] libmachine: (addons-469211)     </serial>
	I0731 18:16:59.076987  403525 main.go:141] libmachine: (addons-469211)     <console type='pty'>
	I0731 18:16:59.077000  403525 main.go:141] libmachine: (addons-469211)       <target type='serial' port='0'/>
	I0731 18:16:59.077009  403525 main.go:141] libmachine: (addons-469211)     </console>
	I0731 18:16:59.077026  403525 main.go:141] libmachine: (addons-469211)     <rng model='virtio'>
	I0731 18:16:59.077049  403525 main.go:141] libmachine: (addons-469211)       <backend model='random'>/dev/random</backend>
	I0731 18:16:59.077073  403525 main.go:141] libmachine: (addons-469211)     </rng>
	I0731 18:16:59.077090  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.077104  403525 main.go:141] libmachine: (addons-469211)     
	I0731 18:16:59.077119  403525 main.go:141] libmachine: (addons-469211)   </devices>
	I0731 18:16:59.077130  403525 main.go:141] libmachine: (addons-469211) </domain>
	I0731 18:16:59.077139  403525 main.go:141] libmachine: (addons-469211) 
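After printing the domain XML, the driver defines and boots the VM ("Creating domain..."). A similarly hedged sketch of that step, assuming the XML above has been written to a hypothetical addons-469211-domain.xml file:

	package main

	import (
		"log"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// The domain XML printed in the log would be read from a generated file here (path is illustrative).
		xml, err := os.ReadFile("addons-469211-domain.xml")
		if err != nil {
			log.Fatalf("read domain xml: %v", err)
		}

		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Define the persistent domain, then boot it ("Creating domain..." in the log).
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatalf("define domain: %v", err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatalf("start domain: %v", err)
		}
		log.Println("domain addons-469211 started")
	}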
	I0731 18:16:59.083139  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:6a:41:e1 in network default
	I0731 18:16:59.083690  403525 main.go:141] libmachine: (addons-469211) Ensuring networks are active...
	I0731 18:16:59.083708  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:16:59.084301  403525 main.go:141] libmachine: (addons-469211) Ensuring network default is active
	I0731 18:16:59.084599  403525 main.go:141] libmachine: (addons-469211) Ensuring network mk-addons-469211 is active
	I0731 18:16:59.085079  403525 main.go:141] libmachine: (addons-469211) Getting domain xml...
	I0731 18:16:59.085694  403525 main.go:141] libmachine: (addons-469211) Creating domain...
	I0731 18:17:00.522102  403525 main.go:141] libmachine: (addons-469211) Waiting to get IP...
	I0731 18:17:00.522961  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:00.523409  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:00.523435  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:00.523394  403547 retry.go:31] will retry after 255.733272ms: waiting for machine to come up
	I0731 18:17:00.781047  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:00.781769  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:00.781798  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:00.781710  403547 retry.go:31] will retry after 348.448819ms: waiting for machine to come up
	I0731 18:17:01.131221  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:01.131642  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:01.131674  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:01.131583  403547 retry.go:31] will retry after 470.018453ms: waiting for machine to come up
	I0731 18:17:01.603271  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:01.603761  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:01.603794  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:01.603707  403547 retry.go:31] will retry after 465.247494ms: waiting for machine to come up
	I0731 18:17:02.070353  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:02.070784  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:02.070868  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:02.070764  403547 retry.go:31] will retry after 524.894257ms: waiting for machine to come up
	I0731 18:17:02.597587  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:02.597993  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:02.598023  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:02.597937  403547 retry.go:31] will retry after 918.935628ms: waiting for machine to come up
	I0731 18:17:03.518773  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:03.519126  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:03.519179  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:03.519075  403547 retry.go:31] will retry after 906.928454ms: waiting for machine to come up
	I0731 18:17:04.427174  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:04.427537  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:04.427568  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:04.427472  403547 retry.go:31] will retry after 1.311363775s: waiting for machine to come up
	I0731 18:17:05.740966  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:05.741455  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:05.741488  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:05.741360  403547 retry.go:31] will retry after 1.50986554s: waiting for machine to come up
	I0731 18:17:07.252971  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:07.253336  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:07.253369  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:07.253269  403547 retry.go:31] will retry after 1.760852072s: waiting for machine to come up
	I0731 18:17:09.016358  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:09.016787  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:09.016821  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:09.016720  403547 retry.go:31] will retry after 1.866108056s: waiting for machine to come up
	I0731 18:17:10.885962  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:10.886352  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:10.886379  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:10.886339  403547 retry.go:31] will retry after 3.530188806s: waiting for machine to come up
	I0731 18:17:14.418449  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:14.418895  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:14.418926  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:14.418858  403547 retry.go:31] will retry after 3.789908324s: waiting for machine to come up
	I0731 18:17:18.210719  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:18.211150  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find current IP address of domain addons-469211 in network mk-addons-469211
	I0731 18:17:18.211176  403525 main.go:141] libmachine: (addons-469211) DBG | I0731 18:17:18.211114  403547 retry.go:31] will retry after 4.872628016s: waiting for machine to come up
	I0731 18:17:23.086081  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.086521  403525 main.go:141] libmachine: (addons-469211) Found IP for machine: 192.168.39.187
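The retry.go lines above poll the domain's DHCP lease with progressively longer waits until an IP appears. A small illustrative retry loop in the same spirit; the backoff growth and attempt limit are assumptions, not minikube's values:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP retries lookup with a growing delay until it returns an address or attempts run out.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between polls
		}
		return "", errors.New("machine never reported an IP address")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.187", nil
		}, 10)
		fmt.Println(ip, err)
	}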
	I0731 18:17:23.086548  403525 main.go:141] libmachine: (addons-469211) Reserving static IP address...
	I0731 18:17:23.086577  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has current primary IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.086962  403525 main.go:141] libmachine: (addons-469211) DBG | unable to find host DHCP lease matching {name: "addons-469211", mac: "52:54:00:62:76:b3", ip: "192.168.39.187"} in network mk-addons-469211
	I0731 18:17:23.159724  403525 main.go:141] libmachine: (addons-469211) DBG | Getting to WaitForSSH function...
	I0731 18:17:23.159761  403525 main.go:141] libmachine: (addons-469211) Reserved static IP address: 192.168.39.187
	I0731 18:17:23.159776  403525 main.go:141] libmachine: (addons-469211) Waiting for SSH to be available...
	I0731 18:17:23.162036  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.162481  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.162517  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.162627  403525 main.go:141] libmachine: (addons-469211) DBG | Using SSH client type: external
	I0731 18:17:23.162656  403525 main.go:141] libmachine: (addons-469211) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa (-rw-------)
	I0731 18:17:23.162687  403525 main.go:141] libmachine: (addons-469211) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:17:23.162703  403525 main.go:141] libmachine: (addons-469211) DBG | About to run SSH command:
	I0731 18:17:23.162717  403525 main.go:141] libmachine: (addons-469211) DBG | exit 0
	I0731 18:17:23.289189  403525 main.go:141] libmachine: (addons-469211) DBG | SSH cmd err, output: <nil>: 
	I0731 18:17:23.289505  403525 main.go:141] libmachine: (addons-469211) KVM machine creation complete!
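WaitForSSH above simply runs `exit 0` over SSH until the guest answers, using the external ssh client and the options shown in the DBG output. A sketch of that probe; the one-second interval and two-minute timeout are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady returns nil once `exit 0` succeeds over SSH, probing once per second.
	func sshReady(user, host, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, host),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh to %s not ready within %v", host, timeout)
			}
			time.Sleep(time.Second)
		}
	}

	func main() {
		err := sshReady("docker", "192.168.39.187",
			"/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa",
			2*time.Minute)
		fmt.Println(err)
	}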
	I0731 18:17:23.289795  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:17:23.290474  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:23.290690  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:23.290897  403525 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:17:23.290915  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:23.292447  403525 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:17:23.292473  403525 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:17:23.292487  403525 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:17:23.292495  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.294621  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.294946  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.294976  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.295079  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.295275  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.295456  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.295612  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.295794  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.296019  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.296032  403525 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:17:23.404003  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:17:23.404025  403525 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:17:23.404033  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.407001  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.407394  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.407428  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.407643  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.407849  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.408067  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.408200  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.408363  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.408588  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.408604  403525 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:17:23.517404  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:17:23.517516  403525 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:17:23.517531  403525 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:17:23.517544  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.517807  403525 buildroot.go:166] provisioning hostname "addons-469211"
	I0731 18:17:23.517839  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.518073  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.520809  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.521240  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.521271  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.521547  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.521761  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.521955  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.522094  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.522299  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.522537  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.522554  403525 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-469211 && echo "addons-469211" | sudo tee /etc/hostname
	I0731 18:17:23.646450  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-469211
	
	I0731 18:17:23.646475  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.649397  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.649705  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.649730  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.649905  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.650137  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.650293  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.650419  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.650555  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:23.650735  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:23.650757  403525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-469211' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-469211/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-469211' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:17:23.770332  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:17:23.770388  403525 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:17:23.770422  403525 buildroot.go:174] setting up certificates
	I0731 18:17:23.770443  403525 provision.go:84] configureAuth start
	I0731 18:17:23.770459  403525 main.go:141] libmachine: (addons-469211) Calling .GetMachineName
	I0731 18:17:23.770736  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:23.773283  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.773714  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.773744  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.773915  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.776287  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.776684  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.776710  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.776897  403525 provision.go:143] copyHostCerts
	I0731 18:17:23.776979  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:17:23.777128  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:17:23.777216  403525 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:17:23.777290  403525 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.addons-469211 san=[127.0.0.1 192.168.39.187 addons-469211 localhost minikube]
	I0731 18:17:23.893530  403525 provision.go:177] copyRemoteCerts
	I0731 18:17:23.893600  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:17:23.893635  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:23.896263  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.896622  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:23.896649  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:23.896870  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:23.897078  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:23.897234  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:23.897366  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:23.984714  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:17:24.011612  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:17:24.037480  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:17:24.063900  403525 provision.go:87] duration metric: took 293.437893ms to configureAuth
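
The configureAuth step above produces a CA-signed server certificate whose SANs match the log line (127.0.0.1, 192.168.39.187, addons-469211, localhost, minikube). A self-contained Go sketch of the same idea using only the standard library; it generates a throwaway CA, whereas minikube reuses the existing CA under .minikube/certs:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA (for illustration only).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "demoCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Server certificate with the SAN set shown in the log for addons-469211.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-469211"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-469211", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
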
	I0731 18:17:24.063933  403525 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:17:24.064154  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:24.064251  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.066992  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.067352  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.067389  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.067609  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.067846  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.068017  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.068201  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.068359  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:24.068564  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:24.068584  403525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:17:24.346387  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:17:24.346420  403525 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:17:24.346431  403525 main.go:141] libmachine: (addons-469211) Calling .GetURL
	I0731 18:17:24.347933  403525 main.go:141] libmachine: (addons-469211) DBG | Using libvirt version 6000000
	I0731 18:17:24.350145  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.350486  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.350518  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.350650  403525 main.go:141] libmachine: Docker is up and running!
	I0731 18:17:24.350685  403525 main.go:141] libmachine: Reticulating splines...
	I0731 18:17:24.350694  403525 client.go:171] duration metric: took 26.399860648s to LocalClient.Create
	I0731 18:17:24.350723  403525 start.go:167] duration metric: took 26.399928962s to libmachine.API.Create "addons-469211"
	I0731 18:17:24.350738  403525 start.go:293] postStartSetup for "addons-469211" (driver="kvm2")
	I0731 18:17:24.350753  403525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:17:24.350778  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.351056  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:17:24.351088  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.353363  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.353717  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.353736  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.353900  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.354096  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.354265  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.354445  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.440250  403525 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:17:24.445334  403525 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:17:24.445390  403525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:17:24.445486  403525 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:17:24.445517  403525 start.go:296] duration metric: took 94.769189ms for postStartSetup
	I0731 18:17:24.445564  403525 main.go:141] libmachine: (addons-469211) Calling .GetConfigRaw
	I0731 18:17:24.446215  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:24.448911  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.449261  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.449291  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.449523  403525 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/config.json ...
	I0731 18:17:24.449742  403525 start.go:128] duration metric: took 26.518040171s to createHost
	I0731 18:17:24.449767  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.452287  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.452604  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.452637  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.452766  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.452964  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.453130  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.453280  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.453412  403525 main.go:141] libmachine: Using SSH client type: native
	I0731 18:17:24.453572  403525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0731 18:17:24.453582  403525 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:17:24.561454  403525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722449844.540035441
	
	I0731 18:17:24.561482  403525 fix.go:216] guest clock: 1722449844.540035441
	I0731 18:17:24.561490  403525 fix.go:229] Guest: 2024-07-31 18:17:24.540035441 +0000 UTC Remote: 2024-07-31 18:17:24.449755382 +0000 UTC m=+26.623671337 (delta=90.280059ms)
	I0731 18:17:24.561531  403525 fix.go:200] guest clock delta is within tolerance: 90.280059ms
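
The guest-clock check above parses the VM's `date +%s.%N` output and compares it with the host-side timestamp of the command, accepting a small delta. A sketch of that comparison in Go; both timestamps are the ones printed in the log, and the tolerance value is an assumption for illustration, not minikube's actual threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses `date +%s.%N` output, assuming the full 9-digit nanosecond field.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1722449844.540035441") // value from the log above
		if err != nil {
			panic(err)
		}
		remote := time.Date(2024, 7, 31, 18, 17, 24, 449755382, time.UTC) // host-side reference from the log
		delta := guest.Sub(remote)
		const tolerance = time.Second // assumed threshold, for illustration only
		fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}
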
	I0731 18:17:24.561541  403525 start.go:83] releasing machines lock for "addons-469211", held for 26.629924175s
	I0731 18:17:24.561573  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.562035  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:24.565002  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.565288  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.565309  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.565549  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566008  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566226  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:24.566348  403525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:17:24.566401  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.566479  403525 ssh_runner.go:195] Run: cat /version.json
	I0731 18:17:24.566508  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:24.569251  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569301  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569690  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.569717  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569751  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:24.569769  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:24.569847  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.570005  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.570106  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:24.570170  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.570234  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:24.570312  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.570396  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:24.570558  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:24.671372  403525 ssh_runner.go:195] Run: systemctl --version
	I0731 18:17:24.677631  403525 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:17:24.845020  403525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:17:24.851012  403525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:17:24.851107  403525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:17:24.867371  403525 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:17:24.867405  403525 start.go:495] detecting cgroup driver to use...
	I0731 18:17:24.867530  403525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:17:24.884978  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:17:24.899614  403525 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:17:24.899697  403525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:17:24.914472  403525 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:17:24.928564  403525 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:17:25.044847  403525 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:17:25.207671  403525 docker.go:233] disabling docker service ...
	I0731 18:17:25.207770  403525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:17:25.222696  403525 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:17:25.236238  403525 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:17:25.360802  403525 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:17:25.483581  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:17:25.498471  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:17:25.517487  403525 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:17:25.517569  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.529166  403525 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:17:25.529252  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.540411  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.551586  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.562618  403525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:17:25.574388  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.585608  403525 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:17:25.603234  403525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
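
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and the cgroup driver. The same two edits expressed as a small Go sketch operating on a local copy of the drop-in; this is illustrative only, since the real run shells out via ssh_runner as shown in the log:

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		// Local copy for illustration; on the VM the file is /etc/crio/crio.conf.d/02-crio.conf.
		const path = "02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		// Same substitutions the sed commands above perform.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}
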
	I0731 18:17:25.615010  403525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:17:25.625038  403525 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:17:25.625133  403525 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:17:25.639603  403525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:17:25.649980  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:17:25.766857  403525 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:17:25.903790  403525 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:17:25.903893  403525 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:17:25.908981  403525 start.go:563] Will wait 60s for crictl version
	I0731 18:17:25.909074  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:17:25.913349  403525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:17:25.954241  403525 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:17:25.954384  403525 ssh_runner.go:195] Run: crio --version
	I0731 18:17:25.983832  403525 ssh_runner.go:195] Run: crio --version
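
Waiting up to 60s for /var/run/crio/crio.sock before probing crictl is a poll-until-exists loop. A minimal Go sketch of that wait; the poll interval and error wording are assumptions, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout expires,
	// mirroring the "Will wait 60s for socket path" step above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}
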
	I0731 18:17:26.013808  403525 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:17:26.015025  403525 main.go:141] libmachine: (addons-469211) Calling .GetIP
	I0731 18:17:26.018094  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:26.018446  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:26.018472  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:26.018734  403525 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:17:26.023287  403525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
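
The /etc/hosts update above strips any stale host.minikube.internal line and appends the current mapping. A Go sketch of the same filter-and-append, run against a local copy named "hosts" rather than the guest's /etc/hosts:

	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const path = "hosts" // local copy for illustration; the real command rewrites /etc/hosts via sudo
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping, mirroring the `grep -v` in the command above.
			if strings.Contains(line, "host.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			log.Fatal(err)
		}
	}
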
	I0731 18:17:26.036267  403525 kubeadm.go:883] updating cluster {Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:17:26.036438  403525 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:17:26.036496  403525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:17:26.070571  403525 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:17:26.070686  403525 ssh_runner.go:195] Run: which lz4
	I0731 18:17:26.075110  403525 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:17:26.079436  403525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:17:26.079470  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:17:27.416664  403525 crio.go:462] duration metric: took 1.341590497s to copy over tarball
	I0731 18:17:27.416745  403525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:17:29.704086  403525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.287304873s)
	I0731 18:17:29.704129  403525 crio.go:469] duration metric: took 2.287433068s to extract the tarball
	I0731 18:17:29.704141  403525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:17:29.742365  403525 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:17:29.786765  403525 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:17:29.786791  403525 cache_images.go:84] Images are preloaded, skipping loading
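
The preload step above copies preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 to the VM and unpacks it with tar -I lz4. For illustration only, a Go sketch that lists the entries of such an archive, assuming the third-party github.com/pierrec/lz4/v4 module; minikube itself simply shells out to tar, as the log shows:

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"log"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		f, err := os.Open("preloaded.tar.lz4") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(hdr.Name) // image layers and metadata under /var on the guest
		}
	}
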
	I0731 18:17:29.786800  403525 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.30.3 crio true true} ...
	I0731 18:17:29.786973  403525 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-469211 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:17:29.787063  403525 ssh_runner.go:195] Run: crio config
	I0731 18:17:29.832104  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:17:29.832129  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:17:29.832152  403525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:17:29.832175  403525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-469211 NodeName:addons-469211 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:17:29.832331  403525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-469211"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:17:29.832442  403525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:17:29.842793  403525 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:17:29.842918  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 18:17:29.852751  403525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 18:17:29.869448  403525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:17:29.886357  403525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 18:17:29.903459  403525 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0731 18:17:29.907575  403525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:17:29.921718  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:17:30.055968  403525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:17:30.083106  403525 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211 for IP: 192.168.39.187
	I0731 18:17:30.083136  403525 certs.go:194] generating shared ca certs ...
	I0731 18:17:30.083159  403525 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.083353  403525 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:17:30.174620  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt ...
	I0731 18:17:30.174653  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt: {Name:mk708a6cde81dea79b45116658d3ff1bc40d565c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.174821  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key ...
	I0731 18:17:30.174832  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key: {Name:mka0b6105bb80f7ef14e64fd9743c2f620c475d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.174907  403525 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:17:30.242518  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt ...
	I0731 18:17:30.242549  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt: {Name:mk43214b6f02650cbebf8422c755c00b188077ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.242712  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key ...
	I0731 18:17:30.242722  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key: {Name:mk8dd5a5172815e6b1d2fd70a7a880625c4287a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.242791  403525 certs.go:256] generating profile certs ...
	I0731 18:17:30.242907  403525 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key
	I0731 18:17:30.242923  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt with IP's: []
	I0731 18:17:30.432606  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt ...
	I0731 18:17:30.432648  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: {Name:mk11b3ded6f747bee8843390ec5f205bc4e0af1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.432847  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key ...
	I0731 18:17:30.432860  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.key: {Name:mkd8065150e0b6b0d8b07ceca4d4ab2de2142b3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.432948  403525 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d
	I0731 18:17:30.432969  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187]
	I0731 18:17:30.556658  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d ...
	I0731 18:17:30.556695  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d: {Name:mk06c796e440bd2a0d06b4f549d2107dbdee4829 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.556889  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d ...
	I0731 18:17:30.556905  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d: {Name:mk20996de13e3c0b2ca71f44ce2cb2586353edaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.556984  403525 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt.0538625d -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt
	I0731 18:17:30.557062  403525 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key.0538625d -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key
	I0731 18:17:30.557114  403525 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key
	I0731 18:17:30.557167  403525 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt with IP's: []
	I0731 18:17:30.768352  403525 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt ...
	I0731 18:17:30.768401  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt: {Name:mka013489cf097b934dd44f9e58f88346af08b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.768597  403525 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key ...
	I0731 18:17:30.768614  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key: {Name:mkb51ce469763d417db85296e1ba2b76097f6efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:30.768805  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:17:30.768845  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:17:30.768872  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:17:30.768899  403525 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:17:30.769574  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:17:30.805498  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:17:30.849106  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:17:30.878265  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:17:30.902555  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 18:17:30.928658  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:17:30.952770  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:17:30.976774  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 18:17:31.000983  403525 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:17:31.024988  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:17:31.042033  403525 ssh_runner.go:195] Run: openssl version
	I0731 18:17:31.048775  403525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:17:31.060012  403525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.064644  403525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.064729  403525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:17:31.070691  403525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:17:31.082265  403525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:17:31.086488  403525 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:17:31.086582  403525 kubeadm.go:392] StartCluster: {Name:addons-469211 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-469211 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:17:31.086665  403525 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:17:31.086706  403525 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:17:31.124907  403525 cri.go:89] found id: ""
	I0731 18:17:31.124997  403525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:17:31.135123  403525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:17:31.144992  403525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:17:31.154689  403525 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:17:31.154707  403525 kubeadm.go:157] found existing configuration files:
	
	I0731 18:17:31.154752  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:17:31.163822  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:17:31.163969  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:17:31.174090  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:17:31.183495  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:17:31.183552  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:17:31.193244  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:17:31.202228  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:17:31.202282  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:17:31.211935  403525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:17:31.221121  403525 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:17:31.221171  403525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:17:31.230754  403525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:17:31.295302  403525 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:17:31.295927  403525 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:17:31.445698  403525 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:17:31.445792  403525 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:17:31.445876  403525 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:17:31.655085  403525 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:17:31.798689  403525 out.go:204]   - Generating certificates and keys ...
	I0731 18:17:31.798822  403525 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:17:31.798940  403525 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:17:31.940458  403525 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 18:17:32.189616  403525 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 18:17:32.397725  403525 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 18:17:32.557690  403525 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 18:17:32.642631  403525 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 18:17:32.642800  403525 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-469211 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0731 18:17:32.756548  403525 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 18:17:32.756778  403525 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-469211 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0731 18:17:32.880514  403525 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 18:17:33.145182  403525 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 18:17:33.383751  403525 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 18:17:33.384050  403525 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:17:33.619447  403525 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:17:33.685479  403525 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:17:33.835984  403525 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:17:34.108804  403525 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:17:34.212350  403525 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:17:34.214162  403525 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:17:34.217122  403525 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:17:34.219052  403525 out.go:204]   - Booting up control plane ...
	I0731 18:17:34.219185  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:17:34.219258  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:17:34.219355  403525 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:17:34.235090  403525 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:17:34.237164  403525 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:17:34.237428  403525 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:17:34.362564  403525 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:17:34.362657  403525 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:17:34.862904  403525 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56592ms
	I0731 18:17:34.863002  403525 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:17:39.863307  403525 kubeadm.go:310] [api-check] The API server is healthy after 5.001685671s
	I0731 18:17:39.874044  403525 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:17:39.897593  403525 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:17:39.936622  403525 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:17:39.936830  403525 kubeadm.go:310] [mark-control-plane] Marking the node addons-469211 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:17:39.950364  403525 kubeadm.go:310] [bootstrap-token] Using token: i5tlvs.bruakb7fr5op4n2g
	I0731 18:17:39.951744  403525 out.go:204]   - Configuring RBAC rules ...
	I0731 18:17:39.951866  403525 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:17:39.961418  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:17:39.972463  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:17:39.982890  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:17:39.994865  403525 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:17:40.010766  403525 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:17:40.268746  403525 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:17:40.716132  403525 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:17:41.272902  403525 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:17:41.273790  403525 kubeadm.go:310] 
	I0731 18:17:41.273858  403525 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:17:41.273896  403525 kubeadm.go:310] 
	I0731 18:17:41.274039  403525 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:17:41.274060  403525 kubeadm.go:310] 
	I0731 18:17:41.274110  403525 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:17:41.274202  403525 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:17:41.274293  403525 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:17:41.274305  403525 kubeadm.go:310] 
	I0731 18:17:41.274386  403525 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:17:41.274398  403525 kubeadm.go:310] 
	I0731 18:17:41.274458  403525 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:17:41.274466  403525 kubeadm.go:310] 
	I0731 18:17:41.274534  403525 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:17:41.274620  403525 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:17:41.274718  403525 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:17:41.274728  403525 kubeadm.go:310] 
	I0731 18:17:41.274830  403525 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:17:41.274942  403525 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:17:41.274951  403525 kubeadm.go:310] 
	I0731 18:17:41.275048  403525 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token i5tlvs.bruakb7fr5op4n2g \
	I0731 18:17:41.275174  403525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd \
	I0731 18:17:41.275226  403525 kubeadm.go:310] 	--control-plane 
	I0731 18:17:41.275235  403525 kubeadm.go:310] 
	I0731 18:17:41.275358  403525 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:17:41.275366  403525 kubeadm.go:310] 
	I0731 18:17:41.275484  403525 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token i5tlvs.bruakb7fr5op4n2g \
	I0731 18:17:41.275615  403525 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd 
	I0731 18:17:41.276087  403525 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
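The preflight warning above means the kubelet systemd unit is running but not enabled to start at boot. A minimal sketch of the remediation kubeadm suggests, assuming a shell on the node (for example via "minikube ssh -p addons-469211"); it is not part of this run:

    # enable the kubelet unit so it also starts on boot, then confirm
    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet.service   # expected output: enabled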
	I0731 18:17:41.276122  403525 cni.go:84] Creating CNI manager for ""
	I0731 18:17:41.276140  403525 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:17:41.277953  403525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 18:17:41.279741  403525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 18:17:41.292101  403525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
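The 496-byte conflist copied above is the bridge CNI configuration minikube selected for the kvm2 driver with the crio runtime. As a rough sketch only (the field values here are assumptions, not the exact bytes minikube generated), a bridge conflist under /etc/cni/net.d typically chains the bridge plugin with host-local IPAM and the portmap plugin:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }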
	I0731 18:17:41.311205  403525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:17:41.311347  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:41.311397  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-469211 minikube.k8s.io/updated_at=2024_07_31T18_17_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=addons-469211 minikube.k8s.io/primary=true
	I0731 18:17:41.344769  403525 ops.go:34] apiserver oom_adj: -16
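The two kubectl invocations above grant cluster-admin to the kube-system default service account (so the bundled addons can manage cluster resources) and stamp the node with minikube version/commit labels, while the oom_adj readout confirms the apiserver's OOM-kill priority has been lowered to -16. A hypothetical verification of both, run against the same kubeconfig (not part of this log):

    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o wide
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node addons-469211 --show-labels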
	I0731 18:17:41.455531  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:41.955811  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:42.456250  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:42.955987  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:43.455945  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:43.955575  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:44.456586  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:44.956339  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:45.456492  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:45.956630  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:46.455849  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:46.956227  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:47.455778  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:47.955635  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:48.455651  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:48.955582  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:49.456058  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:49.955648  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:50.456347  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:50.956034  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:51.455684  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:51.955959  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:52.455959  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:52.956154  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:53.455969  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:53.956513  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:54.456366  403525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:17:54.540406  403525 kubeadm.go:1113] duration metric: took 13.229139346s to wait for elevateKubeSystemPrivileges
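The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries roughly every 500ms until the "default" ServiceAccount exists, which is the signal that kube-system privileges can be elevated. An equivalent shell loop, shown only for illustration (minikube implements this in Go):

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing of the attempts logged above
    done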
	I0731 18:17:54.540445  403525 kubeadm.go:394] duration metric: took 23.453870858s to StartCluster
	I0731 18:17:54.540478  403525 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:54.540617  403525 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:17:54.541011  403525 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:17:54.541208  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 18:17:54.541241  403525 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:17:54.541313  403525 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
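The toEnable map above records which addons this profile will turn on (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, yakd, volcano, and others) and which stay off. The same set can be adjusted per profile from the CLI; a hypothetical example, assuming a minikube binary on PATH:

    minikube -p addons-469211 addons list
    minikube -p addons-469211 addons enable metrics-server
    minikube -p addons-469211 addons disable inspektor-gadget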
	I0731 18:17:54.541445  403525 addons.go:69] Setting yakd=true in profile "addons-469211"
	I0731 18:17:54.541492  403525 addons.go:69] Setting volcano=true in profile "addons-469211"
	I0731 18:17:54.541508  403525 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-469211"
	I0731 18:17:54.541486  403525 addons.go:69] Setting ingress=true in profile "addons-469211"
	I0731 18:17:54.541521  403525 addons.go:234] Setting addon volcano=true in "addons-469211"
	I0731 18:17:54.541523  403525 addons.go:69] Setting volumesnapshots=true in profile "addons-469211"
	I0731 18:17:54.541525  403525 addons.go:69] Setting registry=true in profile "addons-469211"
	I0731 18:17:54.541530  403525 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-469211"
	I0731 18:17:54.541538  403525 addons.go:234] Setting addon ingress=true in "addons-469211"
	I0731 18:17:54.541541  403525 addons.go:234] Setting addon volumesnapshots=true in "addons-469211"
	I0731 18:17:54.541549  403525 addons.go:234] Setting addon registry=true in "addons-469211"
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541575  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541494  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:54.541568  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541456  403525 addons.go:69] Setting cloud-spanner=true in profile "addons-469211"
	I0731 18:17:54.541690  403525 addons.go:234] Setting addon cloud-spanner=true in "addons-469211"
	I0731 18:17:54.541463  403525 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-469211"
	I0731 18:17:54.541713  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541742  403525 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-469211"
	I0731 18:17:54.541780  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541467  403525 addons.go:69] Setting default-storageclass=true in profile "addons-469211"
	I0731 18:17:54.541880  403525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-469211"
	I0731 18:17:54.542082  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542121  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542144  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542144  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.541450  403525 addons.go:69] Setting inspektor-gadget=true in profile "addons-469211"
	I0731 18:17:54.542160  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542174  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542183  403525 addons.go:234] Setting addon inspektor-gadget=true in "addons-469211"
	I0731 18:17:54.542187  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542206  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541479  403525 addons.go:69] Setting ingress-dns=true in profile "addons-469211"
	I0731 18:17:54.542177  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542249  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542259  403525 addons.go:234] Setting addon ingress-dns=true in "addons-469211"
	I0731 18:17:54.541514  403525 addons.go:234] Setting addon yakd=true in "addons-469211"
	I0731 18:17:54.541448  403525 addons.go:69] Setting gcp-auth=true in profile "addons-469211"
	I0731 18:17:54.542271  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.541481  403525 addons.go:69] Setting storage-provisioner=true in profile "addons-469211"
	I0731 18:17:54.542286  403525 mustload.go:65] Loading cluster: addons-469211
	I0731 18:17:54.541516  403525 addons.go:69] Setting metrics-server=true in profile "addons-469211"
	I0731 18:17:54.542312  403525 addons.go:234] Setting addon metrics-server=true in "addons-469211"
	I0731 18:17:54.542252  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.541487  403525 addons.go:69] Setting helm-tiller=true in profile "addons-469211"
	I0731 18:17:54.542338  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542357  403525 addons.go:234] Setting addon helm-tiller=true in "addons-469211"
	I0731 18:17:54.542314  403525 addons.go:234] Setting addon storage-provisioner=true in "addons-469211"
	I0731 18:17:54.541570  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.541510  403525 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-469211"
	I0731 18:17:54.542500  403525 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-469211"
	I0731 18:17:54.542555  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542563  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542570  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542580  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542599  403525 config.go:182] Loaded profile config "addons-469211": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:17:54.542681  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542689  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542698  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542747  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542844  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542869  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.542898  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542939  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.542964  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.542988  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543017  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.543033  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543105  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.543119  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.543196  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.550666  403525 out.go:177] * Verifying Kubernetes components...
	I0731 18:17:54.552320  403525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
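"Verifying Kubernetes components" begins with a systemd daemon-reload so any updated unit files are picked up before services are checked. Hypothetical spot checks for the same step, not part of this run:

    sudo systemctl is-active kubelet
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes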
	I0731 18:17:54.558127  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44897
	I0731 18:17:54.558720  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.559194  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0731 18:17:54.559305  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.559327  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.559549  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.559729  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.560340  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.560385  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.560705  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.560721  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.561137  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.561717  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.561758  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.561938  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0731 18:17:54.564910  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.564939  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.564956  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.564999  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.565023  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0731 18:17:54.569160  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.569214  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.569705  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.570176  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.570460  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.570485  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.570745  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.570763  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.570851  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.571406  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.571430  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.571603  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.572165  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.572210  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.589536  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34977
	I0731 18:17:54.590071  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.590611  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.590633  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.590952  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.591163  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.591742  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0731 18:17:54.592236  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.592804  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.592821  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.595297  403525 addons.go:234] Setting addon default-storageclass=true in "addons-469211"
	I0731 18:17:54.595343  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.595720  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.595752  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.596154  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.596730  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.596772  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.600711  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37117
	I0731 18:17:54.601242  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.601779  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.601800  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.601819  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0731 18:17:54.602133  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.602257  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.602876  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.602917  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.603528  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.603557  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.603972  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.604218  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.606223  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.606621  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.606655  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.610819  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38123
	I0731 18:17:54.611240  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.611861  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.611882  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.612322  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.612590  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.614204  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0731 18:17:54.614408  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34233
	I0731 18:17:54.614798  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.614923  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.615451  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.615469  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.615603  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.615613  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.615791  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.616176  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.616222  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.617077  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.617128  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.617668  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.617698  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.619084  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 18:17:54.620602  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:17:54.621977  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:17:54.622594  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0731 18:17:54.623132  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.623558  403525 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 18:17:54.623581  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 18:17:54.623602  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.623724  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.623743  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.625123  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.625727  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.625767  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.626554  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0731 18:17:54.628845  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0731 18:17:54.628981  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37069
	I0731 18:17:54.629077  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.629137  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.629167  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.629185  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.629300  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.629558  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.629626  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.629712  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.630114  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.630129  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.630416  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0731 18:17:54.630578  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.631035  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.631482  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.631501  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.632138  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.632170  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.632593  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.632808  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.633718  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.633921  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.634445  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.634462  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.634519  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.635114  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.635133  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.635641  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.636101  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.636551  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 18:17:54.637230  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.637777  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 18:17:54.637798  403525 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 18:17:54.637818  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.638363  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.638408  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.639264  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41271
	I0731 18:17:54.639815  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.640028  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.640442  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:17:54.640466  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:17:54.642676  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.642703  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.642732  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.642748  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41521
	I0731 18:17:54.642750  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.642770  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:17:54.642797  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:17:54.642805  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:17:54.642814  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:17:54.642821  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:17:54.643130  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:17:54.643161  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.643205  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:17:54.643213  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 18:17:54.643316  403525 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
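The warning above is expected for this configuration: the volcano addon is requested in the profile but does not support the crio runtime, so minikube reports the error from the addon's enable callback and continues with the remaining addons. A hypothetical way to avoid the warning on crio profiles is simply to leave volcano disabled:

    minikube -p addons-469211 addons disable volcano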
	I0731 18:17:54.643484  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.643652  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0731 18:17:54.643884  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.643899  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.644065  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.644073  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.644080  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.644763  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.645067  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.645157  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.645255  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.645489  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.645778  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.645795  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.646712  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.647013  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.647043  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.647239  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.647436  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.648910  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.649454  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 18:17:54.650628  403525 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 18:17:54.652105  403525 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 18:17:54.652128  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 18:17:54.652163  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.652239  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 18:17:54.653340  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0731 18:17:54.653547  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45137
	I0731 18:17:54.654008  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.654108  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.654931  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 18:17:54.654955  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.654973  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.655120  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.655132  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.655558  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I0731 18:17:54.655574  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.655623  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.655945  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.656572  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.656590  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.657261  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.657307  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.657524  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.658243  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.658457  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.658696  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.658914  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0731 18:17:54.658953  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 18:17:54.659301  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.659324  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.659638  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.659723  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.659797  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.659938  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.660100  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.660549  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.660563  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.661042  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.661324  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.661486  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 18:17:54.662699  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 18:17:54.663693  403525 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-469211"
	I0731 18:17:54.663737  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:17:54.664091  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.664132  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.664353  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.664872  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 18:17:54.665341  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.665907  403525 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 18:17:54.666655  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38743
	I0731 18:17:54.666913  403525 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 18:17:54.666919  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 18:17:54.666943  403525 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 18:17:54.666947  403525 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 18:17:54.666964  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.668341  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 18:17:54.668359  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 18:17:54.668484  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.669954  403525 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 18:17:54.670414  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0731 18:17:54.671113  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.671564  403525 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 18:17:54.671581  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 18:17:54.671599  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.671727  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.671880  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.671894  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.672398  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.673075  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.673118  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.673405  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.674005  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.674022  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.674280  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.674407  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.674710  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.674731  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.674776  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.674958  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.675166  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.675235  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.675414  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.675646  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.675677  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.675698  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.675901  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.676211  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.676407  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.676431  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.676442  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.676566  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.676735  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.676753  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.676981  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.677218  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.677460  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.678842  403525 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 18:17:54.680004  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 18:17:54.680017  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 18:17:54.680030  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.683032  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.683641  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.683675  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.683891  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.684050  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.684240  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.684359  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.685573  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36973
	I0731 18:17:54.686143  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.686718  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.686735  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.687153  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.687376  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.688125  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I0731 18:17:54.688600  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.689131  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.689149  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.689504  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.689708  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.689774  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.690341  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I0731 18:17:54.690662  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.691159  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.691181  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.691514  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.691705  403525 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 18:17:54.691713  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.692982  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 18:17:54.693000  403525 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 18:17:54.693020  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.694055  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39893
	I0731 18:17:54.694092  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.694314  403525 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:17:54.694331  403525 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:17:54.694347  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.694407  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.694866  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.695009  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.695039  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.695726  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.695969  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.698018  403525 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 18:17:54.698703  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45827
	I0731 18:17:54.698778  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.698957  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699125  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699323  403525 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 18:17:54.699342  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 18:17:54.699354  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.699360  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.699563  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.699593  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.699733  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.699863  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.700076  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.700353  403525 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 18:17:54.700640  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.700665  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.700807  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.701033  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.701319  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.701334  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.701419  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.701552  403525 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 18:17:54.701549  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.701564  403525 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 18:17:54.701589  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.701808  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.702049  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.702668  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.705090  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705110  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705324  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I0731 18:17:54.705538  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.705564  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705749  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.705900  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.705920  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.705961  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.706052  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.706068  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.706394  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.706437  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.706467  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.706482  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.706579  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.706700  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.706708  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.706890  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.707020  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.707727  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41199
	I0731 18:17:54.708179  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.708661  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.708705  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.708719  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.708743  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40309
	I0731 18:17:54.709073  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.709350  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.709371  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.709841  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.709872  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.710302  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.710541  403525 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 18:17:54.711062  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:17:54.711098  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:17:54.711437  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.711795  403525 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 18:17:54.711817  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 18:17:54.711835  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.713024  403525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:17:54.714430  403525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:17:54.714451  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:17:54.714470  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.714539  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.714998  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.715020  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.715159  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.715343  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.715475  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.715611  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.717044  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.717398  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.717449  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.717602  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.717751  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.717966  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.718128  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	W0731 18:17:54.729949  403525 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43180->192.168.39.187:22: read: connection reset by peer
	I0731 18:17:54.729979  403525 retry.go:31] will retry after 125.107357ms: ssh: handshake failed: read tcp 192.168.39.1:43180->192.168.39.187:22: read: connection reset by peer
	I0731 18:17:54.744514  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36769
	I0731 18:17:54.745067  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:17:54.745610  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:17:54.745629  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:17:54.745911  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:17:54.746131  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:17:54.747732  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:17:54.749684  403525 out.go:177]   - Using image docker.io/busybox:stable
	I0731 18:17:54.751136  403525 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 18:17:54.752694  403525 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 18:17:54.752712  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 18:17:54.752735  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:17:54.755736  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.756230  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:17:54.756263  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:17:54.756528  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:17:54.756754  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:17:54.756924  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:17:54.757124  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:17:54.994401  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 18:17:55.051797  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 18:17:55.051838  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 18:17:55.069661  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 18:17:55.078859  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 18:17:55.078884  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 18:17:55.146885  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 18:17:55.146919  403525 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 18:17:55.214417  403525 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 18:17:55.214445  403525 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 18:17:55.235318  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:17:55.252043  403525 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:17:55.252083  403525 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 18:17:55.293004  403525 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 18:17:55.293039  403525 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 18:17:55.294405  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 18:17:55.300271  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 18:17:55.312649  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 18:17:55.312676  403525 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 18:17:55.352285  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 18:17:55.372808  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 18:17:55.372849  403525 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 18:17:55.378691  403525 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 18:17:55.378722  403525 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 18:17:55.380002  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 18:17:55.380022  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 18:17:55.453526  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 18:17:55.453567  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 18:17:55.460502  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:17:55.498328  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 18:17:55.498357  403525 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 18:17:55.511320  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 18:17:55.535486  403525 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 18:17:55.535513  403525 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 18:17:55.556542  403525 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 18:17:55.556571  403525 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 18:17:55.610890  403525 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 18:17:55.610911  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 18:17:55.630872  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 18:17:55.630901  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 18:17:55.636638  403525 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 18:17:55.636669  403525 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 18:17:55.642310  403525 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.089957062s)
	I0731 18:17:55.642329  403525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.10108413s)
	I0731 18:17:55.642384  403525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:17:55.642478  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 18:17:55.801921  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 18:17:55.801956  403525 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 18:17:55.845719  403525 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 18:17:55.845749  403525 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 18:17:55.890515  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 18:17:55.892563  403525 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 18:17:55.892593  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 18:17:56.008807  403525 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 18:17:56.008951  403525 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 18:17:56.014333  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 18:17:56.014362  403525 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 18:17:56.031342  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 18:17:56.038388  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 18:17:56.038415  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 18:17:56.103015  403525 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 18:17:56.103042  403525 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 18:17:56.151491  403525 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:17:56.151513  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 18:17:56.213402  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 18:17:56.358794  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 18:17:56.358832  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 18:17:56.459741  403525 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 18:17:56.459773  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 18:17:56.493582  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:17:56.770418  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 18:17:56.780669  403525 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 18:17:56.780703  403525 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 18:17:56.966527  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 18:17:56.966564  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 18:17:57.410036  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 18:17:57.410070  403525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 18:17:57.853552  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 18:17:57.853626  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 18:17:58.128245  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 18:17:58.128277  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 18:17:58.434289  403525 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 18:17:58.434320  403525 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 18:17:58.751688  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 18:18:01.720288  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 18:18:01.720342  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:18:01.723616  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:01.724047  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:18:01.724083  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:01.724245  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:18:01.724478  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:18:01.724652  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:18:01.724878  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:18:01.992697  403525 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 18:18:02.126711  403525 addons.go:234] Setting addon gcp-auth=true in "addons-469211"
	I0731 18:18:02.126796  403525 host.go:66] Checking if "addons-469211" exists ...
	I0731 18:18:02.127267  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:18:02.127311  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:18:02.143455  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0731 18:18:02.144011  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:18:02.144590  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:18:02.144610  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:18:02.145047  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:18:02.145597  403525 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:18:02.145627  403525 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:18:02.161990  403525 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43697
	I0731 18:18:02.162489  403525 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:18:02.163176  403525 main.go:141] libmachine: Using API Version  1
	I0731 18:18:02.163201  403525 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:18:02.163587  403525 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:18:02.163849  403525 main.go:141] libmachine: (addons-469211) Calling .GetState
	I0731 18:18:02.165501  403525 main.go:141] libmachine: (addons-469211) Calling .DriverName
	I0731 18:18:02.165767  403525 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 18:18:02.165799  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHHostname
	I0731 18:18:02.168800  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:02.169275  403525 main.go:141] libmachine: (addons-469211) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:76:b3", ip: ""} in network mk-addons-469211: {Iface:virbr1 ExpiryTime:2024-07-31 19:17:13 +0000 UTC Type:0 Mac:52:54:00:62:76:b3 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:addons-469211 Clientid:01:52:54:00:62:76:b3}
	I0731 18:18:02.169306  403525 main.go:141] libmachine: (addons-469211) DBG | domain addons-469211 has defined IP address 192.168.39.187 and MAC address 52:54:00:62:76:b3 in network mk-addons-469211
	I0731 18:18:02.169549  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHPort
	I0731 18:18:02.169742  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHKeyPath
	I0731 18:18:02.169929  403525 main.go:141] libmachine: (addons-469211) Calling .GetSSHUsername
	I0731 18:18:02.170074  403525 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/addons-469211/id_rsa Username:docker}
	I0731 18:18:02.783909  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.789464771s)
	I0731 18:18:02.783974  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.714274852s)
	I0731 18:18:02.783988  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784002  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784019  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784033  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784038  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.548680683s)
	I0731 18:18:02.784076  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784100  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784115  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.489678106s)
	I0731 18:18:02.784156  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784166  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784204  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.483894101s)
	I0731 18:18:02.784234  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784247  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784281  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.431963868s)
	I0731 18:18:02.784297  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784307  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784318  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.323796491s)
	I0731 18:18:02.784333  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784341  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784398  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.273055005s)
	I0731 18:18:02.784414  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784423  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784455  403525 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.141954039s)
	I0731 18:18:02.784472  403525 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.142072332s)
	I0731 18:18:02.784479  403525 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
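(Note: the sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host IP. A minimal way to double-check that injection from outside the test harness is sketched below; the "dns-probe" pod name is made up for illustration, and the busybox:stable image is simply reused from the addon list above.)

    # hypothetical check: confirm the hosts block landed in the Corefile and resolves in-cluster
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
    kubectl run dns-probe --rm -it --restart=Never --image=busybox:stable -- nslookup host.minikube.internal
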
	I0731 18:18:02.784707  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784722  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784735  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784739  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784744  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784753  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784751  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784775  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784785  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784793  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784795  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784801  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784806  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784810  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784819  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784884  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.784913  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.784921  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.784931  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.784939  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.784994  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785019  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785030  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785049  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785058  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785109  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785164  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785172  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785243  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785287  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785301  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785369  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.894809588s)
	I0731 18:18:02.785391  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785401  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785473  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.754104733s)
	I0731 18:18:02.785487  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785494  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785556  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.57212321s)
	I0731 18:18:02.785568  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785576  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785702  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.29208185s)
	W0731 18:18:02.785730  403525 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 18:18:02.785750  403525 retry.go:31] will retry after 370.884373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
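(Note: the failure above is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind, so the apply exits 1 and minikube retries. One common way to avoid the race, sketched here under the assumption that the manifests can be applied in two passes, is to wait for the CRD to report Established before applying resources that use it; the log below shows minikube instead retrying and later re-applying with --force.)

    # hypothetical sketch: apply the CRDs first, wait until they are Established,
    # then apply the custom resources that depend on them
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
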
	I0731 18:18:02.785824  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.015371478s)
	I0731 18:18:02.785840  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785848  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785917  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.785920  403525 node_ready.go:35] waiting up to 6m0s for node "addons-469211" to be "Ready" ...
	I0731 18:18:02.785939  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.785945  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.785952  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.785958  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.785995  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.786013  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.786019  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.786026  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.786032  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.786065  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.786083  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.786089  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787518  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787547  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787558  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.787567  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.787662  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787683  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787690  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787698  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787708  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787716  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787894  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.787923  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.787932  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.787941  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.787950  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788007  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788081  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788091  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788100  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.788108  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788164  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788190  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788198  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788208  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.788216  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.788268  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.788290  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.788298  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.788308  403525 addons.go:475] Verifying addon ingress=true in "addons-469211"
	I0731 18:18:02.789772  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789795  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789800  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789811  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.789819  403525 addons.go:475] Verifying addon metrics-server=true in "addons-469211"
	I0731 18:18:02.789827  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789834  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.789891  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.789933  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.789939  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.791991  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792022  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792029  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792255  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792267  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792275  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.792285  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.792389  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792440  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.792456  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.792467  403525 addons.go:475] Verifying addon registry=true in "addons-469211"
	I0731 18:18:02.792615  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.792639  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.793190  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.793216  403525 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-469211 service yakd-dashboard -n yakd-dashboard
	
	I0731 18:18:02.793321  403525 out.go:177] * Verifying ingress addon...
	I0731 18:18:02.794357  403525 out.go:177] * Verifying registry addon...
	I0731 18:18:02.796229  403525 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 18:18:02.796838  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 18:18:02.811133  403525 node_ready.go:49] node "addons-469211" has status "Ready":"True"
	I0731 18:18:02.811163  403525 node_ready.go:38] duration metric: took 25.224048ms for node "addons-469211" to be "Ready" ...
	I0731 18:18:02.811177  403525 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:18:02.847144  403525 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 18:18:02.847188  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:02.877381  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.877412  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.877815  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.877836  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 18:18:02.877948  403525 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
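(Note: the warning above is an optimistic-concurrency conflict: the default-storageclass addon and the local-path provisioner both update the same StorageClass object, so the second full update is rejected with "the object has been modified". A patch does not carry a resourceVersion, so re-issuing the change as a patch sidesteps the conflict; the sketch below assumes minikube's built-in class is named "standard", and the annotation key is the standard storageclass.kubernetes.io/is-default-class marker.)

    # hypothetical sketch: re-apply the default-class marking as patches instead of full updates
    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
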
	I0731 18:18:02.897143  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:02.897174  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:02.897525  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:02.897553  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:02.897557  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:02.922855  403525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:02.942021  403525 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 18:18:02.942046  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:02.984340  403525 pod_ready.go:92] pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:02.984408  403525 pod_ready.go:81] duration metric: took 61.520217ms for pod "coredns-7db6d8ff4d-kh5dt" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:02.984426  403525 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.143063  403525 pod_ready.go:92] pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.143108  403525 pod_ready.go:81] duration metric: took 158.671617ms for pod "coredns-7db6d8ff4d-zc9fz" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.143124  403525 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.156807  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 18:18:03.266537  403525 pod_ready.go:92] pod "etcd-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.266568  403525 pod_ready.go:81] duration metric: took 123.435127ms for pod "etcd-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.266582  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.294773  403525 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-469211" context rescaled to 1 replicas
	I0731 18:18:03.303200  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:03.308753  403525 pod_ready.go:92] pod "kube-apiserver-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.308779  403525 pod_ready.go:81] duration metric: took 42.188541ms for pod "kube-apiserver-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.308791  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.309669  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:03.319617  403525 pod_ready.go:92] pod "kube-controller-manager-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.319652  403525 pod_ready.go:81] duration metric: took 10.85165ms for pod "kube-controller-manager-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.319671  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rmpj2" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.593707  403525 pod_ready.go:92] pod "kube-proxy-rmpj2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:03.593743  403525 pod_ready.go:81] duration metric: took 274.062498ms for pod "kube-proxy-rmpj2" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.593757  403525 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:03.803381  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:03.808702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.000200  403525 pod_ready.go:92] pod "kube-scheduler-addons-469211" in "kube-system" namespace has status "Ready":"True"
	I0731 18:18:04.000224  403525 pod_ready.go:81] duration metric: took 406.459784ms for pod "kube-scheduler-addons-469211" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:04.000236  403525 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace to be "Ready" ...
	I0731 18:18:04.311093  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.311240  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:04.359937  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.608188548s)
	I0731 18:18:04.359982  403525 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.194191445s)
	I0731 18:18:04.360004  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:04.360022  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:04.360452  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:04.360486  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:04.360501  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:04.360508  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:04.360516  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:04.360785  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:04.360810  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:04.360823  403525 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-469211"
	I0731 18:18:04.362613  403525 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 18:18:04.362627  403525 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 18:18:04.364156  403525 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 18:18:04.364880  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 18:18:04.365363  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 18:18:04.365381  403525 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 18:18:04.399530  403525 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 18:18:04.399551  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:04.517948  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 18:18:04.517973  403525 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 18:18:04.610990  403525 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 18:18:04.611013  403525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 18:18:04.774078  403525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 18:18:04.806616  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:04.806702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:04.871477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.303318  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:05.303596  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:05.370303  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.801860  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:05.804914  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:05.871012  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:05.874887  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.718033509s)
	I0731 18:18:05.874950  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:05.874966  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:05.875291  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:05.875318  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:05.875331  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:05.875341  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:05.875584  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:05.875618  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.015435  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:06.301985  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:06.303429  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:06.371562  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:06.684627  403525 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.910505894s)
	I0731 18:18:06.684696  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:06.684709  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:06.685054  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:06.685093  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:06.685120  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.685149  403525 main.go:141] libmachine: Making call to close driver server
	I0731 18:18:06.685161  403525 main.go:141] libmachine: (addons-469211) Calling .Close
	I0731 18:18:06.685396  403525 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:18:06.685420  403525 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:18:06.685430  403525 main.go:141] libmachine: (addons-469211) DBG | Closing plugin on server side
	I0731 18:18:06.687632  403525 addons.go:475] Verifying addon gcp-auth=true in "addons-469211"
	I0731 18:18:06.689554  403525 out.go:177] * Verifying gcp-auth addon...
	I0731 18:18:06.692053  403525 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 18:18:06.714301  403525 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 18:18:06.714324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:06.808486  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:06.809025  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:06.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:07.195316  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:07.302004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:07.302243  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:07.371865  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:07.698872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:07.801521  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:07.803788  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:07.874336  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:08.206125  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:08.307036  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:08.312053  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:08.417539  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:08.528502  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:08.700020  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:08.808781  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:08.809722  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:08.871711  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:09.195745  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:09.301693  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:09.301864  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:09.371229  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:09.697711  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:09.802572  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:09.803016  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:09.872442  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:10.195911  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:10.301757  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:10.302243  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:10.373340  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:10.696439  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:10.800065  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:10.801649  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:10.872723  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:11.011770  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:11.195907  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:11.303507  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:11.304163  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:11.370491  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:11.696700  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:11.800598  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:11.803390  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:11.871291  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:12.196621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:12.301092  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:12.303114  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:12.371306  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:12.696117  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:12.802787  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:12.802881  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:12.870987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:13.198334  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:13.302253  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:13.303927  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:13.638943  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:13.639146  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:13.697681  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:13.802184  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:13.802494  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:13.871760  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:14.196474  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:14.302051  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:14.302180  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:14.383723  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:14.695741  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:14.802232  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:14.802342  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:14.872676  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:15.195750  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:15.301295  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:15.302132  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:15.372205  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:15.696219  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:15.802541  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:15.804245  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:15.873300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:16.005713  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:16.196251  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:16.302854  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:16.304140  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:16.370725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:16.696841  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:16.802663  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:16.802716  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:16.872844  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:17.196484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:17.301931  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:17.304621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:17.370253  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:17.696024  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:17.801431  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:17.802251  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:17.870606  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:18.006534  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:18.197066  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:18.301261  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:18.303123  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:18.371432  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:18.695712  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:18.801751  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:18.802188  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:18.871914  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:19.255640  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:19.302007  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:19.303910  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:19.371321  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:19.696725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:19.801106  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:19.802564  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:19.870836  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:20.198108  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:20.302364  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:20.302477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:20.370718  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:20.508341  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:20.696554  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:20.802749  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:20.802891  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:20.870847  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:21.196342  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:21.302074  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:21.313358  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:21.370074  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:21.698356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:21.807700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:21.814238  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:21.870965  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:22.196216  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:22.302051  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:22.303717  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:22.370808  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:22.573250  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:22.966437  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:22.967259  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:22.967363  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:22.969702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:23.196542  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:23.300786  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:23.302014  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:23.370734  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:23.699406  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:23.801988  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:23.803983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:23.870804  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:24.196530  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:24.302786  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:24.303229  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:24.370300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:24.696508  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:24.801529  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:24.801599  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:24.870052  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:25.007007  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:25.198429  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:25.303069  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:25.303151  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:25.370763  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:25.695871  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:25.801506  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:25.802153  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:25.874000  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:26.196081  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:26.303440  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:26.304176  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:26.371128  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:26.696013  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:26.801007  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:26.801303  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:26.870626  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:27.007808  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:27.197946  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:27.305692  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:27.306175  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:27.371149  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:27.695657  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:27.800549  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:27.802492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:27.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:28.195525  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:28.301354  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:28.302417  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:28.370788  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:28.695962  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:28.801261  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:28.803207  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:28.872164  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:29.007844  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:29.196653  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:29.301888  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:29.302606  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:29.371346  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:29.698352  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:29.801435  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:29.802617  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:29.871059  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:30.197381  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:30.304408  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:30.304431  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:30.371102  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:30.696420  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:30.800432  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:30.801814  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:30.870897  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:31.198366  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:31.301838  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:31.303007  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:31.370833  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:31.804460  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:31.807654  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:31.812069  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:31.814627  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:31.871186  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:32.196904  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:32.302278  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:32.302700  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:32.371124  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:32.696802  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:32.802502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:32.802817  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:32.871238  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:33.196769  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:33.301896  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:33.301991  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:33.372801  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:33.696233  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:33.801286  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:33.802663  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:33.871541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:34.011018  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:34.196432  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:34.302680  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:34.304795  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:34.371857  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:34.695986  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:34.802919  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:34.803550  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:34.876451  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:35.196863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:35.302581  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:35.305155  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:35.370858  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:35.696030  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:35.802514  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:35.804332  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.319148  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:36.320150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:36.320423  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.322803  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:36.324487  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:36.370699  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:36.695998  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:36.801740  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:36.801804  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:36.870299  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:37.195781  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:37.300760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:37.302395  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:37.370488  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:37.696634  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:37.800603  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:37.802534  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:37.870994  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:38.196917  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:38.302292  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:38.302425  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:38.373696  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:38.506973  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:38.696810  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:38.801660  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:38.801816  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:38.871142  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:39.196165  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:39.302091  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:39.302477  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:39.370438  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:39.697625  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:39.810231  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:39.810632  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:39.871311  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:40.196752  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:40.301059  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:40.302009  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:40.371313  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:40.696556  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:40.800761  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:40.801648  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:40.870715  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:41.006973  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:41.199009  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:41.301291  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:41.302282  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:41.371714  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:41.696389  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:41.803589  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:41.817440  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:41.883662  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:42.197121  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:42.301678  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:42.302690  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:42.371312  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:42.696115  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:42.801174  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:42.801326  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:42.869907  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:43.197281  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:43.301437  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:43.301591  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:43.370540  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:43.506096  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:43.696417  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:43.802335  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:43.802440  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:43.869863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.198299  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:44.301634  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:44.301776  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:44.372287  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.928295  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:44.928337  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:44.928356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:44.928868  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.195965  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:45.302300  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.302821  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:45.371946  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:45.506735  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:45.696923  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:45.801573  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:45.802826  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:45.870319  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:46.196550  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:46.300204  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:46.301683  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 18:18:46.370012  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:46.696863  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:46.800514  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:46.801203  403525 kapi.go:107] duration metric: took 44.004362848s to wait for kubernetes.io/minikube-addons=registry ...
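	The interleaved kapi.go:96 lines above are label-selector polls: each addon wait repeatedly lists the pods matching its selector and logs their phase until they are ready, and the kubernetes.io/minikube-addons=registry wait has just completed after roughly 44 seconds. A minimal sketch of such a poll, assuming client-go, a standard kubeconfig location, and an illustrative 6-minute timeout (this is not minikube's actual implementation):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until all are Running.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						// Roughly mirrors the "current state: Pending" lines in the log above.
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The registry addon pods land in kube-system, as the pod list later in this log shows.
		if err := waitForLabel(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}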
	I0731 18:18:46.871148  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:47.195891  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:47.301098  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:47.371404  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:47.698113  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:47.800940  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:47.870834  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:48.006640  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:48.196924  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:48.301707  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:48.371241  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:48.698370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:48.801525  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:48.870926  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:49.197015  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:49.300712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:49.370819  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:49.695963  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:49.801659  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:49.872409  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:50.199326  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:50.301586  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:50.371150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:50.505746  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:50.696091  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:50.801177  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:50.870702  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:51.198757  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:51.300905  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:51.371594  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:51.697447  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:51.801360  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:51.870775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:52.196773  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:52.300476  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:52.371257  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:52.506616  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:52.696216  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:52.801633  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:52.869714  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:53.195966  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:53.301010  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:53.371766  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:53.695987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:53.801367  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:53.870593  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.196586  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:54.300246  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:54.370918  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.887311  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:54.887830  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:54.888408  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:54.888524  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.197708  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:55.301704  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.372745  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:55.698308  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:55.801384  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:55.871125  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:56.196319  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:56.301427  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:56.371882  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:56.697253  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:56.801071  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:56.870230  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:57.006494  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:57.198055  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:57.300777  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:57.370590  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:57.697513  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:57.800429  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:57.870214  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:58.195986  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:58.301499  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:58.371318  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:58.696829  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:58.801881  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:58.871514  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:59.007358  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:18:59.195792  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:59.300415  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:59.378492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:18:59.695553  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:18:59.800655  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:18:59.870158  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:00.196347  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:00.302038  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:00.377833  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:00.697585  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:00.800974  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:00.871215  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:01.011863  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:01.200324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:01.301490  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:01.370987  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:01.698141  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:01.801329  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:01.871443  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:02.197560  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:02.300541  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:02.373272  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:02.704583  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:02.800881  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:02.874099  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:03.197505  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:03.302030  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:03.370659  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:03.506741  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:03.696527  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:03.800413  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:03.871528  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:04.196272  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:04.301375  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:04.370791  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:04.696681  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:04.801990  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:04.871419  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:05.195889  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:05.301551  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:05.371522  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:05.512956  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:05.696556  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:05.800667  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:05.871163  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:06.197610  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:06.300393  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:06.371255  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:06.695961  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:06.801410  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:06.871072  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:07.197448  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:07.302010  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:07.370256  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:07.696291  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:07.801208  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:07.871701  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:08.006619  403525 pod_ready.go:102] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"False"
	I0731 18:19:08.195718  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:08.300958  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:08.371092  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:08.695862  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:08.801540  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:08.871685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:09.196725  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:09.301146  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:09.371814  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:09.799614  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:09.802397  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:09.876685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:10.006711  403525 pod_ready.go:92] pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace has status "Ready":"True"
	I0731 18:19:10.006733  403525 pod_ready.go:81] duration metric: took 1m6.00649097s for pod "metrics-server-c59844bb4-h86lf" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.006744  403525 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.011719  403525 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace has status "Ready":"True"
	I0731 18:19:10.011742  403525 pod_ready.go:81] duration metric: took 4.992129ms for pod "nvidia-device-plugin-daemonset-rnrgk" in "kube-system" namespace to be "Ready" ...
	I0731 18:19:10.011766  403525 pod_ready.go:38] duration metric: took 1m7.200575143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:19:10.011784  403525 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:19:10.011887  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:10.011961  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:10.064441  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:10.064474  403525 cri.go:89] found id: ""
	I0731 18:19:10.064483  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:10.064549  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.070728  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:10.070799  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:10.136832  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:10.136857  403525 cri.go:89] found id: ""
	I0731 18:19:10.136866  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:10.136927  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.144262  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:10.144332  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:10.195695  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:10.213139  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:10.213162  403525 cri.go:89] found id: ""
	I0731 18:19:10.213172  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:10.213234  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.224629  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:10.224720  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:10.279274  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:10.279301  403525 cri.go:89] found id: ""
	I0731 18:19:10.279310  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:10.279371  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.284466  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:10.284551  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:10.300946  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:10.359726  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:10.359755  403525 cri.go:89] found id: ""
	I0731 18:19:10.359764  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:10.359821  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.370265  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:10.370334  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:10.371717  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:10.437469  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:10.437502  403525 cri.go:89] found id: ""
	I0731 18:19:10.437513  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:10.437574  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:10.448766  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:10.448838  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:10.522724  403525 cri.go:89] found id: ""
	I0731 18:19:10.522760  403525 logs.go:276] 0 containers: []
	W0731 18:19:10.522772  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:10.522786  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:10.522802  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:10.599232  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:10.599266  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:10.688535  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:10.688575  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:10.697920  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:10.801858  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:10.873166  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.138263  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:11.138307  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:11.212524  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:11.212571  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:11.237065  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:11.237105  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:11.436983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:11.439914  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:11.443799  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.512672  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:11.512707  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:11.628979  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:11.629032  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:11.696455  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:11.794828  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:11.794862  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:11.801385  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:11.873249  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:11.943226  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:11.943265  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:12.024804  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:12.024844  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:12.199082  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:12.301620  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:12.370370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:12.698409  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:12.801700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:12.870543  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:13.195595  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:13.300301  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:13.372463  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:13.698085  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:13.801520  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:13.871010  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.195979  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:14.301977  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:14.373363  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.645589  403525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:19:14.675057  403525 api_server.go:72] duration metric: took 1m20.133774799s to wait for apiserver process to appear ...
	I0731 18:19:14.675093  403525 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:19:14.675141  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:14.675201  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:14.695695  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:14.756388  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:14.756416  403525 cri.go:89] found id: ""
	I0731 18:19:14.756426  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:14.756489  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.762824  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:14.762898  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:14.800889  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:14.826370  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:14.826395  403525 cri.go:89] found id: ""
	I0731 18:19:14.826403  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:14.826451  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.832743  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:14.832821  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:14.870687  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:14.904866  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:14.904897  403525 cri.go:89] found id: ""
	I0731 18:19:14.904907  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:14.904971  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.918138  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:14.918226  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:14.969853  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:14.969882  403525 cri.go:89] found id: ""
	I0731 18:19:14.969892  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:14.969956  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:14.974303  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:14.974364  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:15.029569  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:15.029600  403525 cri.go:89] found id: ""
	I0731 18:19:15.029611  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:15.029674  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:15.035633  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:15.035713  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:15.099817  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:15.099839  403525 cri.go:89] found id: ""
	I0731 18:19:15.099847  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:15.099917  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:15.104451  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:15.104523  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:15.196210  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:15.203523  403525 cri.go:89] found id: ""
	I0731 18:19:15.203548  403525 logs.go:276] 0 containers: []
	W0731 18:19:15.203555  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:15.203564  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:15.203576  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:15.255713  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:15.255744  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:15.301413  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:15.347024  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:15.347060  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:15.376208  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:15.502945  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:15.502989  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:15.594900  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:15.594938  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:15.648950  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:15.648977  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:15.699332  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:15.738140  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:15.738185  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:15.801674  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:15.823682  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:15.823725  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:15.871471  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:15.874474  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:15.874505  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:16.142628  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:16.142682  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:16.199759  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:16.300715  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:16.369501  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:16.466148  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:16.466186  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:16.696021  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:16.803728  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:16.870141  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:17.195394  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:17.301182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:17.370975  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:17.695568  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:17.801339  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:17.869932  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:18.198852  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:18.300926  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:18.370635  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:18.695496  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:18.801891  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:18.870661  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.026979  403525 api_server.go:253] Checking apiserver healthz at https://192.168.39.187:8443/healthz ...
	I0731 18:19:19.031422  403525 api_server.go:279] https://192.168.39.187:8443/healthz returned 200:
	ok
	I0731 18:19:19.032293  403525 api_server.go:141] control plane version: v1.30.3
	I0731 18:19:19.032315  403525 api_server.go:131] duration metric: took 4.357214363s to wait for apiserver health ...
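	The api_server.go lines just above show the health wait resolving: an HTTP GET against https://192.168.39.187:8443/healthz returned 200 with body "ok" about 4.4 seconds after the apiserver process appeared. A minimal sketch of such a probe, with TLS verification skipped purely for brevity (the real client authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: certificate checks are disabled here only to keep the sketch short.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.187:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// The log above records the same outcome: status 200 with body "ok".
		fmt.Println(resp.StatusCode, string(body))
	}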
	I0731 18:19:19.032323  403525 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:19:19.032345  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 18:19:19.032412  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 18:19:19.103705  403525 cri.go:89] found id: "13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:19.103733  403525 cri.go:89] found id: ""
	I0731 18:19:19.103742  403525 logs.go:276] 1 containers: [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75]
	I0731 18:19:19.103808  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.117954  403525 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 18:19:19.118042  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 18:19:19.196252  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:19.227134  403525 cri.go:89] found id: "eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:19.227156  403525 cri.go:89] found id: ""
	I0731 18:19:19.227164  403525 logs.go:276] 1 containers: [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71]
	I0731 18:19:19.227224  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.244526  403525 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 18:19:19.244600  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 18:19:19.302026  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:19.373138  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.388765  403525 cri.go:89] found id: "7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:19.388787  403525 cri.go:89] found id: ""
	I0731 18:19:19.388796  403525 logs.go:276] 1 containers: [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa]
	I0731 18:19:19.388860  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.395479  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 18:19:19.395546  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 18:19:19.478364  403525 cri.go:89] found id: "d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:19.478387  403525 cri.go:89] found id: ""
	I0731 18:19:19.478395  403525 logs.go:276] 1 containers: [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49]
	I0731 18:19:19.478446  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.485102  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 18:19:19.485191  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 18:19:19.574690  403525 cri.go:89] found id: "ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:19.574720  403525 cri.go:89] found id: ""
	I0731 18:19:19.574731  403525 logs.go:276] 1 containers: [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4]
	I0731 18:19:19.574790  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.581356  403525 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 18:19:19.581424  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 18:19:19.637028  403525 cri.go:89] found id: "7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:19.637057  403525 cri.go:89] found id: ""
	I0731 18:19:19.637067  403525 logs.go:276] 1 containers: [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45]
	I0731 18:19:19.637118  403525 ssh_runner.go:195] Run: which crictl
	I0731 18:19:19.647252  403525 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 18:19:19.647322  403525 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 18:19:19.695924  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:19.742533  403525 cri.go:89] found id: ""
	I0731 18:19:19.742566  403525 logs.go:276] 0 containers: []
	W0731 18:19:19.742581  403525 logs.go:278] No container was found matching "kindnet"
	I0731 18:19:19.742594  403525 logs.go:123] Gathering logs for kubelet ...
	I0731 18:19:19.742609  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 18:19:19.802263  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:19.828576  403525 logs.go:123] Gathering logs for etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] ...
	I0731 18:19:19.828620  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71"
	I0731 18:19:19.874855  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:19.912389  403525 logs.go:123] Gathering logs for coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] ...
	I0731 18:19:19.912429  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa"
	I0731 18:19:19.961358  403525 logs.go:123] Gathering logs for kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] ...
	I0731 18:19:19.961393  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4"
	I0731 18:19:20.006161  403525 logs.go:123] Gathering logs for CRI-O ...
	I0731 18:19:20.006190  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 18:19:20.213339  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:20.269951  403525 logs.go:123] Gathering logs for dmesg ...
	I0731 18:19:20.269991  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 18:19:20.301109  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:20.326282  403525 logs.go:123] Gathering logs for describe nodes ...
	I0731 18:19:20.326313  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 18:19:20.379286  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:20.613117  403525 logs.go:123] Gathering logs for kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] ...
	I0731 18:19:20.613155  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75"
	I0731 18:19:20.696356  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:20.732212  403525 logs.go:123] Gathering logs for kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] ...
	I0731 18:19:20.732272  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49"
	I0731 18:19:20.826560  403525 logs.go:123] Gathering logs for kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] ...
	I0731 18:19:20.826601  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45"
	I0731 18:19:20.897422  403525 logs.go:123] Gathering logs for container status ...
	I0731 18:19:20.897463  403525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 18:19:21.230372  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:21.231213  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:21.234970  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.302479  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.373293  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:21.697261  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:21.802828  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:21.870492  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:22.196615  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:22.301760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:22.371134  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:22.696346  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:22.801168  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:22.871034  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:23.199691  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:23.301451  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:23.372668  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:23.456371  403525 system_pods.go:59] 18 kube-system pods found
	I0731 18:19:23.456436  403525 system_pods.go:61] "coredns-7db6d8ff4d-kh5dt" [6887255e-d5e1-4423-9b1b-b89bd6b54f70] Running
	I0731 18:19:23.456443  403525 system_pods.go:61] "csi-hostpath-attacher-0" [03f43e9b-6d84-4f4a-b5e1-6b348f9c91d4] Running
	I0731 18:19:23.456447  403525 system_pods.go:61] "csi-hostpath-resizer-0" [bcc4df0c-9611-46c6-9717-15211248b171] Running
	I0731 18:19:23.456454  403525 system_pods.go:61] "csi-hostpathplugin-drwcw" [21a11011-6c40-4c70-bfbc-dd33b6d1fb5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 18:19:23.456458  403525 system_pods.go:61] "etcd-addons-469211" [5d9bba9b-c463-4783-ae70-a807ef84974b] Running
	I0731 18:19:23.456463  403525 system_pods.go:61] "kube-apiserver-addons-469211" [ff095112-6ec8-4380-a753-4927e2405f76] Running
	I0731 18:19:23.456467  403525 system_pods.go:61] "kube-controller-manager-addons-469211" [0f6b7f2d-cd72-452d-91a3-900b66b7dc9f] Running
	I0731 18:19:23.456470  403525 system_pods.go:61] "kube-ingress-dns-minikube" [765bcaec-3909-45a4-abcb-5d18e0090e88] Running
	I0731 18:19:23.456473  403525 system_pods.go:61] "kube-proxy-rmpj2" [6306a255-80b3-4112-bc3b-fb6a294bbd1e] Running
	I0731 18:19:23.456476  403525 system_pods.go:61] "kube-scheduler-addons-469211" [d9bb8dab-ccb7-4ee4-b61f-b21b9ae99244] Running
	I0731 18:19:23.456481  403525 system_pods.go:61] "metrics-server-c59844bb4-h86lf" [9ac7112e-a869-4a80-9630-3e06fb408aa7] Running
	I0731 18:19:23.456484  403525 system_pods.go:61] "nvidia-device-plugin-daemonset-rnrgk" [63c8e69d-6346-4ca1-869b-ff23aa567942] Running
	I0731 18:19:23.456486  403525 system_pods.go:61] "registry-698f998955-zzckf" [c1bb2989-95fe-499e-a046-21d50fcaa446] Running
	I0731 18:19:23.456489  403525 system_pods.go:61] "registry-proxy-gkcvq" [5d23ea46-e28f-4922-8b86-7e1f8ea26754] Running
	I0731 18:19:23.456492  403525 system_pods.go:61] "snapshot-controller-745499f584-8spcg" [2c8b2ba3-9621-4deb-b551-b65f868d47ec] Running
	I0731 18:19:23.456495  403525 system_pods.go:61] "snapshot-controller-745499f584-g74pq" [065f5ffc-cb71-467c-9262-a27862811292] Running
	I0731 18:19:23.456497  403525 system_pods.go:61] "storage-provisioner" [d5ca3d3e-8350-4485-9a29-3a8eff61533d] Running
	I0731 18:19:23.456501  403525 system_pods.go:61] "tiller-deploy-6677d64bcd-8hlxh" [d2d05195-43ba-4de7-91ee-2237d543c3b1] Running
	I0731 18:19:23.456508  403525 system_pods.go:74] duration metric: took 4.424178406s to wait for pod list to return data ...
	I0731 18:19:23.456518  403525 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:19:23.458634  403525 default_sa.go:45] found service account: "default"
	I0731 18:19:23.458653  403525 default_sa.go:55] duration metric: took 2.128393ms for default service account to be created ...
	I0731 18:19:23.458660  403525 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:19:23.469044  403525 system_pods.go:86] 18 kube-system pods found
	I0731 18:19:23.469082  403525 system_pods.go:89] "coredns-7db6d8ff4d-kh5dt" [6887255e-d5e1-4423-9b1b-b89bd6b54f70] Running
	I0731 18:19:23.469091  403525 system_pods.go:89] "csi-hostpath-attacher-0" [03f43e9b-6d84-4f4a-b5e1-6b348f9c91d4] Running
	I0731 18:19:23.469098  403525 system_pods.go:89] "csi-hostpath-resizer-0" [bcc4df0c-9611-46c6-9717-15211248b171] Running
	I0731 18:19:23.469110  403525 system_pods.go:89] "csi-hostpathplugin-drwcw" [21a11011-6c40-4c70-bfbc-dd33b6d1fb5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0731 18:19:23.469119  403525 system_pods.go:89] "etcd-addons-469211" [5d9bba9b-c463-4783-ae70-a807ef84974b] Running
	I0731 18:19:23.469128  403525 system_pods.go:89] "kube-apiserver-addons-469211" [ff095112-6ec8-4380-a753-4927e2405f76] Running
	I0731 18:19:23.469134  403525 system_pods.go:89] "kube-controller-manager-addons-469211" [0f6b7f2d-cd72-452d-91a3-900b66b7dc9f] Running
	I0731 18:19:23.469141  403525 system_pods.go:89] "kube-ingress-dns-minikube" [765bcaec-3909-45a4-abcb-5d18e0090e88] Running
	I0731 18:19:23.469147  403525 system_pods.go:89] "kube-proxy-rmpj2" [6306a255-80b3-4112-bc3b-fb6a294bbd1e] Running
	I0731 18:19:23.469153  403525 system_pods.go:89] "kube-scheduler-addons-469211" [d9bb8dab-ccb7-4ee4-b61f-b21b9ae99244] Running
	I0731 18:19:23.469164  403525 system_pods.go:89] "metrics-server-c59844bb4-h86lf" [9ac7112e-a869-4a80-9630-3e06fb408aa7] Running
	I0731 18:19:23.469170  403525 system_pods.go:89] "nvidia-device-plugin-daemonset-rnrgk" [63c8e69d-6346-4ca1-869b-ff23aa567942] Running
	I0731 18:19:23.469177  403525 system_pods.go:89] "registry-698f998955-zzckf" [c1bb2989-95fe-499e-a046-21d50fcaa446] Running
	I0731 18:19:23.469186  403525 system_pods.go:89] "registry-proxy-gkcvq" [5d23ea46-e28f-4922-8b86-7e1f8ea26754] Running
	I0731 18:19:23.469193  403525 system_pods.go:89] "snapshot-controller-745499f584-8spcg" [2c8b2ba3-9621-4deb-b551-b65f868d47ec] Running
	I0731 18:19:23.469202  403525 system_pods.go:89] "snapshot-controller-745499f584-g74pq" [065f5ffc-cb71-467c-9262-a27862811292] Running
	I0731 18:19:23.469208  403525 system_pods.go:89] "storage-provisioner" [d5ca3d3e-8350-4485-9a29-3a8eff61533d] Running
	I0731 18:19:23.469214  403525 system_pods.go:89] "tiller-deploy-6677d64bcd-8hlxh" [d2d05195-43ba-4de7-91ee-2237d543c3b1] Running
	I0731 18:19:23.469226  403525 system_pods.go:126] duration metric: took 10.560847ms to wait for k8s-apps to be running ...
	I0731 18:19:23.469236  403525 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:19:23.469290  403525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:19:23.484343  403525 system_svc.go:56] duration metric: took 15.096025ms WaitForService to wait for kubelet
	I0731 18:19:23.484395  403525 kubeadm.go:582] duration metric: took 1m28.943115598s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:19:23.484423  403525 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:19:23.487522  403525 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:19:23.487553  403525 node_conditions.go:123] node cpu capacity is 2
	I0731 18:19:23.487570  403525 node_conditions.go:105] duration metric: took 3.141253ms to run NodePressure ...
	I0731 18:19:23.487582  403525 start.go:241] waiting for startup goroutines ...
	I0731 18:19:23.695726  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:23.800927  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:23.870887  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:24.195218  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:24.302499  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:24.371178  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:24.695484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:24.802764  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:24.871487  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:25.198499  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:25.301463  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:25.372077  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 18:19:25.696241  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:25.801634  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:25.870056  403525 kapi.go:107] duration metric: took 1m21.505171282s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 18:19:26.195775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:26.300765  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:26.696089  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:26.801356  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:27.196441  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:27.301302  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:27.697983  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:27.800798  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:28.196944  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:28.301348  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:28.697257  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:28.801327  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:29.196294  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:29.301665  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:29.696872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:29.801664  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:30.196528  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:30.300564  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:30.696015  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:30.801254  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:31.196872  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:31.301446  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:31.695773  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:31.801188  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:32.196364  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:32.301882  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:32.696040  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:32.800989  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:33.196657  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:33.301047  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:33.695775  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:33.801004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:34.196185  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:34.301182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:34.696092  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:34.800958  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:35.196161  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:35.301270  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:35.696662  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:35.802722  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:36.195864  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:36.301220  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:36.696885  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:36.800539  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:37.198025  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:37.301391  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:37.695476  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:37.802177  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:38.197454  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:38.301395  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:38.695516  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:38.802415  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:39.198400  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:39.302482  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:39.697109  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:39.801853  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:40.196984  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:40.301444  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:40.698370  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:40.801988  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:41.197970  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:41.301343  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:41.698484  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:41.800758  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:42.196487  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:42.302053  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:42.696150  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:42.801849  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:43.196161  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:43.301309  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:43.697416  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:43.801461  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:44.197607  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:44.300954  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:44.698302  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:44.801182  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:45.196071  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:45.300840  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:45.695891  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:45.800674  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:46.195755  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:46.300968  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:46.696330  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:46.801746  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:47.195687  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:47.301041  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:47.696561  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:47.803760  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:48.195692  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:48.300620  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:48.696036  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:48.801592  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:49.195810  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:49.302712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:49.697201  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:49.800720  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:50.195921  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:50.301320  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:50.696685  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:50.801091  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:51.197782  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:51.302176  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:51.696197  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:51.803439  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:52.195806  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:52.302046  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:52.695953  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:52.801281  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:53.196975  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:53.301622  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:53.695638  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:53.800694  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:54.195624  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:54.301900  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:54.696131  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:54.800625  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:55.195639  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:55.301135  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:55.696541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:55.800657  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:56.195678  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:56.302029  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:56.697310  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:56.801576  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:57.195667  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:57.300886  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:57.875007  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:57.875173  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:58.195796  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:58.300819  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:58.707374  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:58.801716  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:59.197626  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:59.301151  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:19:59.695100  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:19:59.804439  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:00.195951  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:00.302906  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:00.699386  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:00.803027  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:01.196939  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:01.304213  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:01.696902  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:01.801802  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:02.196111  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:02.301776  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:02.700369  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:02.802122  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:03.195375  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:03.301946  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:03.696225  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:03.801823  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:04.195551  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:04.301732  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:04.696950  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:04.801645  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:05.196018  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:05.301379  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:05.695912  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:05.801700  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:06.195541  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:06.302187  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:06.696502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:06.802537  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:07.197200  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:07.301933  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:07.695694  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:07.801038  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:08.196147  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:08.303949  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:08.696135  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:08.808147  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:09.196730  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:09.300914  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:09.696502  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:09.801172  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:10.196645  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:10.301610  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:10.696405  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:10.803834  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:11.195543  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:11.301332  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:11.696392  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:11.802054  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:12.195994  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:12.301324  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:12.696659  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:12.803772  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:13.195603  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:13.300686  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:13.695855  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:13.803052  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:14.196546  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:14.303073  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:14.696183  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:14.801818  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:15.196058  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:15.302117  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:15.696996  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:15.801685  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:16.196648  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:16.301935  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:16.695621  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:16.801151  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:17.195697  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:17.301025  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:17.696281  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:17.801233  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:18.196489  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:18.300604  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:18.695672  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:18.800934  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:19.196308  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:19.301712  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:19.696333  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:19.801219  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:20.196204  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:20.301140  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:20.696588  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:20.800399  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:21.194990  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:21.303471  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:21.881438  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:21.882192  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:22.195899  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:22.301018  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:22.696031  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:22.801084  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:23.195661  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:23.301004  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:23.696295  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:23.801831  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:24.518414  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:24.518706  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:24.695391  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:24.801370  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:25.195876  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:25.301875  403525 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 18:20:25.696349  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:25.803810  403525 kapi.go:107] duration metric: took 2m23.007579325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 18:20:26.195448  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:26.696321  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:27.196013  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:27.695324  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:28.196218  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:28.696103  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:29.199643  403525 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 18:20:29.696435  403525 kapi.go:107] duration metric: took 2m23.00437999s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 18:20:29.698082  403525 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-469211 cluster.
	I0731 18:20:29.699465  403525 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 18:20:29.701014  403525 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 18:20:29.702449  403525 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 18:20:29.704000  403525 addons.go:510] duration metric: took 2m35.162698923s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 18:20:29.704052  403525 start.go:246] waiting for cluster config update ...
	I0731 18:20:29.704073  403525 start.go:255] writing updated cluster config ...
	I0731 18:20:29.704366  403525 ssh_runner.go:195] Run: rm -f paused
	I0731 18:20:29.757590  403525 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:20:29.759528  403525 out.go:177] * Done! kubectl is now configured to use "addons-469211" cluster and "default" namespace by default
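	As a side note on the gcp-auth messages above: a minimal sketch of opting a single pod out of the credential mount via the `gcp-auth-skip-secret` label they mention. The pod name, image, and the label value "true" are illustrative assumptions; the log only specifies the label key.

	# Hypothetical example -- create a pod in the addons-469211 cluster that carries
	# the gcp-auth-skip-secret label so the gcp-auth webhook leaves it alone.
	kubectl --context addons-469211 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example
	  labels:
	    gcp-auth-skip-secret: "true"   # label key from the addon output; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF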
	
	
	==> CRI-O <==
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.528121439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450429528093042,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=128b23eb-3c90-4afb-81c5-e9cea730000f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.528693451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a590205f-4a07-4b8b-b346-f1ac6949b669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.528745386Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a590205f-4a07-4b8b-b346-f1ac6949b669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.529046701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172244985537762
4034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449855411737181,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a590205f-4a07-4b8b-b346-f1ac6949b669 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.571232427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54283a32-60e7-45c0-b902-c1ffb34ceaf8 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.571306801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54283a32-60e7-45c0-b902-c1ffb34ceaf8 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.572415058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=026f10ed-882d-4724-9711-fc3c70f1a93b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.573920350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450429573891038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=026f10ed-882d-4724-9711-fc3c70f1a93b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.574597712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c219f3a-cc18-4668-9fd3-066e6cd61a9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.574706425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c219f3a-cc18-4668-9fd3-066e6cd61a9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.575022544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172244985537762
4034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449855411737181,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c219f3a-cc18-4668-9fd3-066e6cd61a9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.615286272Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a42e6dce-9643-4fbe-8145-e851a227c9a4 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.615362816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a42e6dce-9643-4fbe-8145-e851a227c9a4 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.617438373Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1177738-6c21-41b1-9f08-88a66a4daf77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.618834310Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450429618798556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1177738-6c21-41b1-9f08-88a66a4daf77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.619418906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdf6538b-7a63-4eb6-a625-4194815bd2ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.619480366Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdf6538b-7a63-4eb6-a625-4194815bd2ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.619737185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172244985537762
4034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449855411737181,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdf6538b-7a63-4eb6-a625-4194815bd2ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.655491996Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ecb18f72-4cf4-4ab5-a60b-f13be7e2f967 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.655563621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ecb18f72-4cf4-4ab5-a60b-f13be7e2f967 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.657078835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34321b8e-41de-412b-b79d-fe3e05b9745c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.658482404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722450429658453421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589582,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34321b8e-41de-412b-b79d-fe3e05b9745c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.659160035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f6fdff2-9aac-48b8-83df-0a5cc213626e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.659228303Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f6fdff2-9aac-48b8-83df-0a5cc213626e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:27:09 addons-469211 crio[683]: time="2024-07-31 18:27:09.659507209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:600e268c9b4c779f4c3459a4aafe1064b3e44d871bd1fb48a9ab77b62fb2ec82,PodSandboxId:f45c4c6caa6836c7ca358084fe9cab7d0cfec3b332e3fef91ba3a3d338bf53c2,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722450240655090248,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tltx6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5d8865e5-0846-480c-9d65-d004373b16c8,},Annotations:map[string]string{io.kubernetes.container.hash: 73525ae6,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95c4be42439962a6762db6498bddf1bd7aced8228cb52c342d8c538f92ee4ba0,PodSandboxId:60aba2a2c1db3b933a74f1c0e5bd7dcaf0e7f9646570b5638fc3b92cb5014984,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722450099883352904,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7a453bb4-7b63-4a8b-b605-225347030b7b,},Annotations:map[string]string{io.kubernet
es.container.hash: 58ef09ed,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c0e8d72cb1666027258058c9b21348d9ccc70819f4be4d421c22dad305717cb,PodSandboxId:e1855c794cdd1d26055db487159ba3007f2d7060c23d39cce309da536710c944,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722450036104687975,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 128508bf-9603-4417-b
d09-541f203f2386,},Annotations:map[string]string{io.kubernetes.container.hash: cc05f1a8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f,PodSandboxId:0347af01ac0ac894b011c6a304d7249596934e67a5e7b40410c2a87820873a6c,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722449887523906983,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-h86lf,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 9ac7112e-a869-4a80-9630-3e06fb408aa7,},Annotations:map[string]string{io.kubernetes.container.hash: 4944eeef,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352,PodSandboxId:6827f9cf8e470257b83de1dff2ea253f5530b7c083eec943d0849ab7155243be,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722449884641566598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5ca3d3e-8350-4485-9a29-3a8eff61533d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6256b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa,PodSandboxId:1d35d21e498d6957c3c2352e4a8c9738f397b681ce100eac273aa0dc2ff072d5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722449877771017499,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-kh5dt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6887255e-d5e1-4423-9b1b-b89bd6b54f70,},Annotations:map[string]string{io.kubernetes.container.hash: 90db1bb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4,PodSandboxId:888165c5f82a280bf24a0854c77f82e1b5fb8fc0789c4b0f024b00d0735dfc20,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d
01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722449875296290358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmpj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6306a255-80b3-4112-bc3b-fb6a294bbd1e,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfdeda3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49,PodSandboxId:60a993a58627d63c0113b221eff3925192fd195c7aa422f319d370810dcaad21,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5
ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722449855433310984,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5285c8f695ccfc514c2932e9d15a4fd,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71,PodSandboxId:ad37fbb5b021847ebe95c1e59dd9cfee96d0c6c674c97ecbb9946fb9e0ab0d98,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAI
NER_RUNNING,CreatedAt:1722449855449736343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a157d0a30c4bb7b9a0d50df15b6d8e59,},Annotations:map[string]string{io.kubernetes.container.hash: 9737cd2f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75,PodSandboxId:a0dfcf2af5767354aac217406128338c407f63e264ce91371fae43bdb941fd94,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:172244985537762
4034,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 952470aefc8105a29ecdb2b616a845cd,},Annotations:map[string]string{io.kubernetes.container.hash: d020afc7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45,PodSandboxId:f04044c1f36fa550a8417101d694e3fa123371e65a877030da1b4f102baae589,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722449855411737181,Lab
els:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-469211,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbc0f0e5b2d72e5f998e2e93ca972466,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f6fdff2-9aac-48b8-83df-0a5cc213626e name=/runtime.v1.RuntimeService/ListContainers
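The crio debug entries above are routine CRI polling: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers with an empty filter, each returning the full container list. As a rough illustration only (not part of the test harness), a minimal Go sketch that issues the same two RuntimeService calls over the socket path shown in the node annotations (unix:///var/run/crio/crio.sock), assuming the k8s.io/cri-api v1 client and gRPC, could look like this:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path taken from the kubeadm cri-socket annotation in the log; adjust if the runtime differs.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Same RPC as /runtime.v1.RuntimeService/Version above.
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

        // Same RPC as /runtime.v1.RuntimeService/ListContainers with an empty filter.
        list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range list.Containers {
            fmt.Println(c.Metadata.Name, c.State)
        }
    }

Run inside the guest VM, this should print cri-o 1.29.1 and the same set of running containers that the ListContainers responses above enumerate.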
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	600e268c9b4c7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   f45c4c6caa683       hello-world-app-6778b5fc9f-tltx6
	95c4be4243996       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   60aba2a2c1db3       nginx
	6c0e8d72cb166       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   e1855c794cdd1       busybox
	9b2fedbba32da       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   9 minutes ago       Running             metrics-server            0                   0347af01ac0ac       metrics-server-c59844bb4-h86lf
	ee60a7abb89e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        9 minutes ago       Running             storage-provisioner       0                   6827f9cf8e470       storage-provisioner
	7536452f0fcb4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        9 minutes ago       Running             coredns                   0                   1d35d21e498d6       coredns-7db6d8ff4d-kh5dt
	ba90a67fa1aa5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        9 minutes ago       Running             kube-proxy                0                   888165c5f82a2       kube-proxy-rmpj2
	eb02210ee6a3d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   ad37fbb5b0218       etcd-addons-469211
	d85e7e6d15dcb       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        9 minutes ago       Running             kube-scheduler            0                   60a993a58627d       kube-scheduler-addons-469211
	7b5254e9d9289       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        9 minutes ago       Running             kube-controller-manager   0                   f04044c1f36fa       kube-controller-manager-addons-469211
	13115e6c0aea5       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        9 minutes ago       Running             kube-apiserver            0                   a0dfcf2af5767       kube-apiserver-addons-469211
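For a quick manual cross-check, the same listing can usually be obtained with the crictl client shipped in the minikube guest, e.g. out/minikube-linux-amd64 -p addons-469211 ssh "sudo crictl ps -a"; crictl renders the same ListContainers data that the table above summarizes.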
	
	
	==> coredns [7536452f0fcb4a773216e6bd3bcfbd43bb2a93be21692933b157b1ecfa8c48fa] <==
	[INFO] 10.244.0.8:51881 - 54780 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152344s
	[INFO] 10.244.0.8:56392 - 37732 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150094s
	[INFO] 10.244.0.8:56392 - 17511 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000177352s
	[INFO] 10.244.0.8:50233 - 17950 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000150563s
	[INFO] 10.244.0.8:50233 - 23840 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000639972s
	[INFO] 10.244.0.8:51520 - 50804 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000089098s
	[INFO] 10.244.0.8:51520 - 43382 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000059882s
	[INFO] 10.244.0.8:50450 - 25131 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00007004s
	[INFO] 10.244.0.8:50450 - 19750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00002636s
	[INFO] 10.244.0.8:59271 - 38940 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000084895s
	[INFO] 10.244.0.8:59271 - 25630 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012547s
	[INFO] 10.244.0.8:55317 - 33831 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059959s
	[INFO] 10.244.0.8:55317 - 33829 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041544s
	[INFO] 10.244.0.8:35059 - 49500 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000053004s
	[INFO] 10.244.0.8:35059 - 30303 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000102092s
	[INFO] 10.244.0.22:40097 - 41793 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000590028s
	[INFO] 10.244.0.22:60712 - 7890 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151206s
	[INFO] 10.244.0.22:58114 - 47304 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124492s
	[INFO] 10.244.0.22:54165 - 29774 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000064445s
	[INFO] 10.244.0.22:54634 - 54026 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008677s
	[INFO] 10.244.0.22:34467 - 55534 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000055567s
	[INFO] 10.244.0.22:58813 - 27925 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000642159s
	[INFO] 10.244.0.22:36111 - 33575 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000980802s
	[INFO] 10.244.0.27:55602 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000594832s
	[INFO] 10.244.0.27:35466 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163617s
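The alternating NXDOMAIN/NOERROR answers above are expected behaviour rather than failures: with the default cluster-first DNS policy a pod's /etc/resolv.conf typically looks like

    nameserver <cluster-DNS service IP>
    search <pod namespace>.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

so a lookup for registry.kube-system.svc.cluster.local is first attempted with each search suffix appended (producing the NXDOMAIN responses logged above) before the name is tried as-is and resolves with NOERROR.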
	
	
	==> describe nodes <==
	Name:               addons-469211
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-469211
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=addons-469211
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_17_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-469211
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:17:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-469211
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:27:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:24:18 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:24:18 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:24:18 +0000   Wed, 31 Jul 2024 18:17:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:24:18 +0000   Wed, 31 Jul 2024 18:17:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.187
	  Hostname:    addons-469211
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb416ebe65de43479a11c073f8c2776c
	  System UUID:                fb416ebe-65de-4347-9a11-c073f8c2776c
	  Boot ID:                    83f919af-68de-44e4-bc69-505ed3b07279
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  default                     hello-world-app-6778b5fc9f-tltx6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 coredns-7db6d8ff4d-kh5dt                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m16s
	  kube-system                 etcd-addons-469211                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m30s
	  kube-system                 kube-apiserver-addons-469211             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-controller-manager-addons-469211    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 kube-proxy-rmpj2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-scheduler-addons-469211             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m30s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m13s                  kube-proxy       
	  Normal  Starting                 9m36s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m36s (x8 over 9m36s)  kubelet          Node addons-469211 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m36s (x8 over 9m36s)  kubelet          Node addons-469211 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m36s (x7 over 9m36s)  kubelet          Node addons-469211 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m30s                  kubelet          Node addons-469211 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s                  kubelet          Node addons-469211 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s                  kubelet          Node addons-469211 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m29s                  kubelet          Node addons-469211 status is now: NodeReady
	  Normal  RegisteredNode           9m17s                  node-controller  Node addons-469211 event: Registered Node addons-469211 in Controller
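For reference, the percentages in the Allocated resources block follow directly from the capacity listed above: 750m of requested CPU against 2 CPUs (2000m) is 750/2000 ≈ 37%, and 170Mi of requested memory against 3912780Ki (≈ 3821Mi) is 170/3821 ≈ 4%; the only non-zero limit is the 170Mi memory limit on coredns-7db6d8ff4d-kh5dt.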
	
	
	==> dmesg <==
	[Jul31 18:18] kauditd_printk_skb: 83 callbacks suppressed
	[ +10.790338] kauditd_printk_skb: 138 callbacks suppressed
	[ +22.950617] kauditd_printk_skb: 4 callbacks suppressed
	[Jul31 18:19] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.550451] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.602192] kauditd_printk_skb: 76 callbacks suppressed
	[ +16.458374] kauditd_printk_skb: 14 callbacks suppressed
	[ +22.058454] kauditd_printk_skb: 24 callbacks suppressed
	[Jul31 18:20] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.971245] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.394336] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.254955] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.198225] kauditd_printk_skb: 34 callbacks suppressed
	[ +10.275643] kauditd_printk_skb: 24 callbacks suppressed
	[Jul31 18:21] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.299228] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.934665] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.172066] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.609330] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.773715] kauditd_printk_skb: 18 callbacks suppressed
	[  +8.235417] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.180409] kauditd_printk_skb: 15 callbacks suppressed
	[Jul31 18:22] kauditd_printk_skb: 33 callbacks suppressed
	[Jul31 18:23] kauditd_printk_skb: 6 callbacks suppressed
	[Jul31 18:24] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [eb02210ee6a3d24a756fbc662c1c238a5f4479c4b868fc91ab87eb2fd66b1c71] <==
	{"level":"warn","ts":"2024-07-31T18:20:21.868733Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.938551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T18:20:21.868775Z","caller":"traceutil/trace.go:171","msg":"trace[1764405769] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1293; }","duration":"185.02019ms","start":"2024-07-31T18:20:21.683748Z","end":"2024-07-31T18:20:21.868768Z","steps":["trace[1764405769] 'agreement among raft nodes before linearized reading'  (duration: 184.900232ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:20:24.505149Z","caller":"traceutil/trace.go:171","msg":"trace[913952359] linearizableReadLoop","detail":"{readStateIndex:1351; appliedIndex:1350; }","duration":"321.27837ms","start":"2024-07-31T18:20:24.183856Z","end":"2024-07-31T18:20:24.505134Z","steps":["trace[913952359] 'read index received'  (duration: 320.974021ms)","trace[913952359] 'applied index is now lower than readState.Index'  (duration: 303.869µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T18:20:24.505305Z","caller":"traceutil/trace.go:171","msg":"trace[765517706] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"476.891773ms","start":"2024-07-31T18:20:24.028406Z","end":"2024-07-31T18:20:24.505298Z","steps":["trace[765517706] 'process raft request'  (duration: 476.462981ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505437Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.186227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-31T18:20:24.505493Z","caller":"traceutil/trace.go:171","msg":"trace[472395463] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1296; }","duration":"217.276574ms","start":"2024-07-31T18:20:24.288208Z","end":"2024-07-31T18:20:24.505485Z","steps":["trace[472395463] 'agreement among raft nodes before linearized reading'  (duration: 217.135101ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505587Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.749165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-07-31T18:20:24.505625Z","caller":"traceutil/trace.go:171","msg":"trace[1691468399] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1296; }","duration":"321.811979ms","start":"2024-07-31T18:20:24.183807Z","end":"2024-07-31T18:20:24.505619Z","steps":["trace[1691468399] 'agreement among raft nodes before linearized reading'  (duration: 321.722907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:20:24.505736Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:20:24.183794Z","time spent":"321.867252ms","remote":"127.0.0.1:41006","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4391,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-31T18:20:24.505454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:20:24.028386Z","time spent":"476.96416ms","remote":"127.0.0.1:41000","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1294 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-31T18:21:00.509458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.796315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:21:00.509533Z","caller":"traceutil/trace.go:171","msg":"trace[1113990574] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1514; }","duration":"341.981307ms","start":"2024-07-31T18:21:00.167532Z","end":"2024-07-31T18:21:00.509513Z","steps":["trace[1113990574] 'range keys from in-memory index tree'  (duration: 341.745763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:00.509566Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:00.167519Z","time spent":"342.037984ms","remote":"127.0.0.1:40848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-31T18:21:33.139639Z","caller":"traceutil/trace.go:171","msg":"trace[1924874950] transaction","detail":"{read_only:false; response_revision:1805; number_of_response:1; }","duration":"244.266945ms","start":"2024-07-31T18:21:32.895354Z","end":"2024-07-31T18:21:33.139621Z","steps":["trace[1924874950] 'process raft request'  (duration: 244.17525ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:33.139799Z","caller":"traceutil/trace.go:171","msg":"trace[2056968767] linearizableReadLoop","detail":"{readStateIndex:1883; appliedIndex:1883; }","duration":"215.626454ms","start":"2024-07-31T18:21:32.924158Z","end":"2024-07-31T18:21:33.139785Z","steps":["trace[2056968767] 'read index received'  (duration: 215.620709ms)","trace[2056968767] 'applied index is now lower than readState.Index'  (duration: 4.915µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:21:33.140102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.927984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8057"}
	{"level":"info","ts":"2024-07-31T18:21:33.140126Z","caller":"traceutil/trace.go:171","msg":"trace[1479707747] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1805; }","duration":"215.992251ms","start":"2024-07-31T18:21:32.924127Z","end":"2024-07-31T18:21:33.140119Z","steps":["trace[1479707747] 'agreement among raft nodes before linearized reading'  (duration: 215.73337ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:33.142508Z","caller":"traceutil/trace.go:171","msg":"trace[1879085037] transaction","detail":"{read_only:false; response_revision:1806; number_of_response:1; }","duration":"217.240015ms","start":"2024-07-31T18:21:32.925259Z","end":"2024-07-31T18:21:33.142499Z","steps":["trace[1879085037] 'process raft request'  (duration: 217.1653ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:21:35.586145Z","caller":"traceutil/trace.go:171","msg":"trace[2122175916] linearizableReadLoop","detail":"{readStateIndex:1887; appliedIndex:1886; }","duration":"439.964432ms","start":"2024-07-31T18:21:35.146157Z","end":"2024-07-31T18:21:35.586122Z","steps":["trace[2122175916] 'read index received'  (duration: 365.253182ms)","trace[2122175916] 'applied index is now lower than readState.Index'  (duration: 74.70998ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:21:35.586375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"440.18852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-07-31T18:21:35.586415Z","caller":"traceutil/trace.go:171","msg":"trace[1964439360] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1808; }","duration":"440.303507ms","start":"2024-07-31T18:21:35.146103Z","end":"2024-07-31T18:21:35.586407Z","steps":["trace[1964439360] 'agreement among raft nodes before linearized reading'  (duration: 440.13877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:35.586443Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:35.146091Z","time spent":"440.346397ms","remote":"127.0.0.1:41000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1137,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2024-07-31T18:21:35.586639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.320664ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:21:35.586675Z","caller":"traceutil/trace.go:171","msg":"trace[1298070854] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1808; }","duration":"419.378907ms","start":"2024-07-31T18:21:35.167289Z","end":"2024-07-31T18:21:35.586668Z","steps":["trace[1298070854] 'agreement among raft nodes before linearized reading'  (duration: 419.327327ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:21:35.586698Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:21:35.167255Z","time spent":"419.438466ms","remote":"127.0.0.1:40848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	
	==> kernel <==
	 18:27:10 up 10 min,  0 users,  load average: 0.04, 0.50, 0.46
	Linux addons-469211 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [13115e6c0aea5aa738c96f2d3b07a624323bfbba70e5bbc50894ffa7efddcc75] <==
	E0731 18:19:09.548080       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.223.16:443: connect: connection refused
	E0731 18:19:09.552420       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1: Get "https://10.96.223.16:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.96.223.16:443: connect: connection refused
	I0731 18:19:09.669519       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0731 18:20:42.278725       1 conn.go:339] Error on socket receive: read tcp 192.168.39.187:8443->192.168.39.1:36566: use of closed network connection
	I0731 18:20:51.793556       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.34.248"}
	E0731 18:21:22.512739       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0731 18:21:24.930226       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0731 18:21:26.038342       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0731 18:21:30.465089       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0731 18:21:30.677078       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.27.6"}
	I0731 18:21:42.990060       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0731 18:22:02.468449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.468787       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.499650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.500137       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.526441       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.526510       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.538039       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.538091       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0731 18:22:02.572428       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0731 18:22:02.572716       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0731 18:22:03.527044       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0731 18:22:03.565762       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0731 18:22:03.588494       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0731 18:23:57.662754       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.152.232"}
	
	
	==> kube-controller-manager [7b5254e9d9289d5d35a6b4814a9b7f8295e540eb81734b7083f3c7e619650b45] <==
	W0731 18:24:40.800782       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:24:40.800889       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:24:51.880663       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:24:51.880813       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:25:13.948699       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:25:13.948884       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:25:14.686537       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:25:14.686653       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:25:37.467150       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:25:37.467308       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:25:49.413815       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:25:49.413850       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:26:10.061299       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:26:10.061628       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:26:14.551137       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:26:14.551247       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:26:30.672826       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:26:30.672882       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:26:37.348200       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:26:37.348269       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:26:55.347614       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:26:55.347676       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0731 18:27:08.237242       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0731 18:27:08.237282       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0731 18:27:08.615985       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="15.161µs"
	
	
	==> kube-proxy [ba90a67fa1aa5446619c21a246c97a8c2c3be4cfcc5d75cdf17bd7f06eb138a4] <==
	I0731 18:17:56.320752       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:17:56.344864       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.187"]
	I0731 18:17:56.436405       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:17:56.436477       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:17:56.436495       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:17:56.440356       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:17:56.440551       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:17:56.440580       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:17:56.442400       1 config.go:192] "Starting service config controller"
	I0731 18:17:56.442435       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:17:56.442463       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:17:56.442467       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:17:56.442923       1 config.go:319] "Starting node config controller"
	I0731 18:17:56.443000       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:17:56.543043       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:17:56.543085       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:17:56.543109       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d85e7e6d15dcb3fb5f41c2061d9919ca59fed75879f580a246f2024a1cdd8a49] <==
	W0731 18:17:38.898226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:17:38.898341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:17:38.912789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:17:38.912888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:17:38.983641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:17:38.983815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:17:38.998251       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:38.998305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.051449       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:17:39.051618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:17:39.093884       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:39.094040       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.104824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 18:17:39.104926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 18:17:39.150842       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:17:39.150927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:17:39.160439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:17:39.160487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 18:17:39.171110       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:17:39.171155       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 18:17:39.182728       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:17:39.182882       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:17:39.245643       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:17:39.246484       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 18:17:42.444836       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 18:24:41 addons-469211 kubelet[1274]: I0731 18:24:41.119135    1274 scope.go:117] "RemoveContainer" containerID="d4698288a9927ce57c2c8f4bd219b31023f842b12c6b2f9fcc8ac894ae32a81b"
	Jul 31 18:24:41 addons-469211 kubelet[1274]: I0731 18:24:41.135493    1274 scope.go:117] "RemoveContainer" containerID="0bec0543a2acc2df9fb43770d908e693188fb53f54872adfdabd3c95578b2766"
	Jul 31 18:25:40 addons-469211 kubelet[1274]: E0731 18:25:40.638661    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:25:40 addons-469211 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:25:40 addons-469211 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:25:40 addons-469211 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:25:40 addons-469211 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:26:01 addons-469211 kubelet[1274]: I0731 18:26:01.622788    1274 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 31 18:26:40 addons-469211 kubelet[1274]: E0731 18:26:40.640066    1274 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:26:40 addons-469211 kubelet[1274]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:26:40 addons-469211 kubelet[1274]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:26:40 addons-469211 kubelet[1274]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:26:40 addons-469211 kubelet[1274]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:27:08 addons-469211 kubelet[1274]: I0731 18:27:08.653794    1274 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-tltx6" podStartSLOduration=189.111280162 podStartE2EDuration="3m11.653755337s" podCreationTimestamp="2024-07-31 18:23:57 +0000 UTC" firstStartedPulling="2024-07-31 18:23:58.100765386 +0000 UTC m=+377.657438489" lastFinishedPulling="2024-07-31 18:24:00.64324056 +0000 UTC m=+380.199913664" observedRunningTime="2024-07-31 18:24:01.493692653 +0000 UTC m=+381.050365775" watchObservedRunningTime="2024-07-31 18:27:08.653755337 +0000 UTC m=+568.210428461"
	Jul 31 18:27:08 addons-469211 kubelet[1274]: E0731 18:27:08.795813    1274 server.go:304] "Unable to authenticate the request due to an error" err="[invalid bearer token, serviceaccounts \"metrics-server\" not found]"
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.114216    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scbvr\" (UniqueName: \"kubernetes.io/projected/9ac7112e-a869-4a80-9630-3e06fb408aa7-kube-api-access-scbvr\") pod \"9ac7112e-a869-4a80-9630-3e06fb408aa7\" (UID: \"9ac7112e-a869-4a80-9630-3e06fb408aa7\") "
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.114285    1274 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ac7112e-a869-4a80-9630-3e06fb408aa7-tmp-dir\") pod \"9ac7112e-a869-4a80-9630-3e06fb408aa7\" (UID: \"9ac7112e-a869-4a80-9630-3e06fb408aa7\") "
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.114670    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ac7112e-a869-4a80-9630-3e06fb408aa7-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9ac7112e-a869-4a80-9630-3e06fb408aa7" (UID: "9ac7112e-a869-4a80-9630-3e06fb408aa7"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.118525    1274 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac7112e-a869-4a80-9630-3e06fb408aa7-kube-api-access-scbvr" (OuterVolumeSpecName: "kube-api-access-scbvr") pod "9ac7112e-a869-4a80-9630-3e06fb408aa7" (UID: "9ac7112e-a869-4a80-9630-3e06fb408aa7"). InnerVolumeSpecName "kube-api-access-scbvr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.205815    1274 scope.go:117] "RemoveContainer" containerID="9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f"
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.214656    1274 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-scbvr\" (UniqueName: \"kubernetes.io/projected/9ac7112e-a869-4a80-9630-3e06fb408aa7-kube-api-access-scbvr\") on node \"addons-469211\" DevicePath \"\""
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.214876    1274 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9ac7112e-a869-4a80-9630-3e06fb408aa7-tmp-dir\") on node \"addons-469211\" DevicePath \"\""
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.254208    1274 scope.go:117] "RemoveContainer" containerID="9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f"
	Jul 31 18:27:10 addons-469211 kubelet[1274]: E0731 18:27:10.254916    1274 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f\": container with ID starting with 9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f not found: ID does not exist" containerID="9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f"
	Jul 31 18:27:10 addons-469211 kubelet[1274]: I0731 18:27:10.255022    1274 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f"} err="failed to get container status \"9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f\": rpc error: code = NotFound desc = could not find container \"9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f\": container with ID starting with 9b2fedbba32dafc23f5e654ebbf94c24e9dc2176c7bf2b98a5cf261666fae93f not found: ID does not exist"
	
	
	==> storage-provisioner [ee60a7abb89e95f18b23ba921a130003d74fcf48788ecd9679c9e2f43c470352] <==
	I0731 18:18:05.211872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 18:18:05.223080       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 18:18:05.223129       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0731 18:18:05.242789       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0731 18:18:05.243158       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb!
	I0731 18:18:05.243293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfb3c91f-ce3b-43a5-b167-157fd9383e5c", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb became leader
	I0731 18:18:05.343837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-469211_dca2c41c-b1f5-41aa-831d-6265d8233ecb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-469211 -n addons-469211
helpers_test.go:261: (dbg) Run:  kubectl --context addons-469211 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (349.94s)

x
+
TestAddons/StoppedEnableDisable (154.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-469211
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-469211: exit status 82 (2m0.473278441s)

-- stdout --
	* Stopping node "addons-469211"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-469211" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-469211
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-469211: exit status 11 (21.645870221s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.187:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-469211" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-469211
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-469211: exit status 11 (6.143587179s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.187:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-469211" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-469211
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-469211: exit status 11 (6.144104617s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.187:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-469211" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.41s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 node stop m02 -v=7 --alsologtostderr
E0731 18:40:09.940928  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:40:32.741682  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:41:31.862043  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.480622144s)

-- stdout --
	* Stopping node "ha-326651-m02"  ...

-- /stdout --
** stderr ** 
	I0731 18:39:38.316674  418010 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:39:38.316949  418010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:39:38.316959  418010 out.go:304] Setting ErrFile to fd 2...
	I0731 18:39:38.316966  418010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:39:38.317164  418010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:39:38.317457  418010 mustload.go:65] Loading cluster: ha-326651
	I0731 18:39:38.318612  418010 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:39:38.318703  418010 stop.go:39] StopHost: ha-326651-m02
	I0731 18:39:38.319541  418010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:39:38.319609  418010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:39:38.335219  418010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0731 18:39:38.335735  418010 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:39:38.336339  418010 main.go:141] libmachine: Using API Version  1
	I0731 18:39:38.336394  418010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:39:38.336806  418010 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:39:38.339012  418010 out.go:177] * Stopping node "ha-326651-m02"  ...
	I0731 18:39:38.340472  418010 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:39:38.340512  418010 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:39:38.340788  418010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:39:38.340839  418010 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:39:38.344036  418010 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:39:38.344604  418010 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:39:38.344637  418010 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:39:38.344791  418010 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:39:38.344966  418010 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:39:38.345114  418010 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:39:38.345283  418010 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:39:38.432523  418010 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:39:38.487928  418010 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:39:38.543439  418010 main.go:141] libmachine: Stopping "ha-326651-m02"...
	I0731 18:39:38.543482  418010 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:39:38.545151  418010 main.go:141] libmachine: (ha-326651-m02) Calling .Stop
	I0731 18:39:38.549634  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 0/120
	I0731 18:39:39.550997  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 1/120
	I0731 18:39:40.552297  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 2/120
	I0731 18:39:41.554438  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 3/120
	I0731 18:39:42.556263  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 4/120
	I0731 18:39:43.558405  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 5/120
	I0731 18:39:44.559931  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 6/120
	I0731 18:39:45.561535  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 7/120
	I0731 18:39:46.562728  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 8/120
	I0731 18:39:47.564255  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 9/120
	I0731 18:39:48.565699  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 10/120
	I0731 18:39:49.567062  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 11/120
	I0731 18:39:50.568812  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 12/120
	I0731 18:39:51.571132  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 13/120
	I0731 18:39:52.572542  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 14/120
	I0731 18:39:53.574566  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 15/120
	I0731 18:39:54.576190  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 16/120
	I0731 18:39:55.577651  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 17/120
	I0731 18:39:56.579119  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 18/120
	I0731 18:39:57.580477  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 19/120
	I0731 18:39:58.582690  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 20/120
	I0731 18:39:59.584369  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 21/120
	I0731 18:40:00.586387  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 22/120
	I0731 18:40:01.587821  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 23/120
	I0731 18:40:02.589301  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 24/120
	I0731 18:40:03.590985  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 25/120
	I0731 18:40:04.592413  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 26/120
	I0731 18:40:05.593828  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 27/120
	I0731 18:40:06.595247  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 28/120
	I0731 18:40:07.597370  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 29/120
	I0731 18:40:08.598758  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 30/120
	I0731 18:40:09.600237  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 31/120
	I0731 18:40:10.602475  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 32/120
	I0731 18:40:11.603923  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 33/120
	I0731 18:40:12.605628  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 34/120
	I0731 18:40:13.607538  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 35/120
	I0731 18:40:14.609509  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 36/120
	I0731 18:40:15.611778  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 37/120
	I0731 18:40:16.613824  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 38/120
	I0731 18:40:17.615426  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 39/120
	I0731 18:40:18.617280  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 40/120
	I0731 18:40:19.619048  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 41/120
	I0731 18:40:20.620455  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 42/120
	I0731 18:40:21.621902  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 43/120
	I0731 18:40:22.623472  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 44/120
	I0731 18:40:23.625534  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 45/120
	I0731 18:40:24.627315  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 46/120
	I0731 18:40:25.629661  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 47/120
	I0731 18:40:26.631026  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 48/120
	I0731 18:40:27.632429  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 49/120
	I0731 18:40:28.634631  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 50/120
	I0731 18:40:29.636167  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 51/120
	I0731 18:40:30.637520  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 52/120
	I0731 18:40:31.638929  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 53/120
	I0731 18:40:32.640267  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 54/120
	I0731 18:40:33.641667  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 55/120
	I0731 18:40:34.643176  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 56/120
	I0731 18:40:35.644702  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 57/120
	I0731 18:40:36.646836  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 58/120
	I0731 18:40:37.648270  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 59/120
	I0731 18:40:38.650502  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 60/120
	I0731 18:40:39.652088  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 61/120
	I0731 18:40:40.653544  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 62/120
	I0731 18:40:41.654892  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 63/120
	I0731 18:40:42.656316  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 64/120
	I0731 18:40:43.658438  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 65/120
	I0731 18:40:44.659902  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 66/120
	I0731 18:40:45.661298  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 67/120
	I0731 18:40:46.662883  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 68/120
	I0731 18:40:47.664898  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 69/120
	I0731 18:40:48.666178  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 70/120
	I0731 18:40:49.668298  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 71/120
	I0731 18:40:50.669748  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 72/120
	I0731 18:40:51.671651  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 73/120
	I0731 18:40:52.673474  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 74/120
	I0731 18:40:53.675226  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 75/120
	I0731 18:40:54.677062  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 76/120
	I0731 18:40:55.678435  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 77/120
	I0731 18:40:56.679810  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 78/120
	I0731 18:40:57.681185  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 79/120
	I0731 18:40:58.683318  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 80/120
	I0731 18:40:59.684981  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 81/120
	I0731 18:41:00.686517  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 82/120
	I0731 18:41:01.688409  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 83/120
	I0731 18:41:02.689791  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 84/120
	I0731 18:41:03.691565  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 85/120
	I0731 18:41:04.693288  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 86/120
	I0731 18:41:05.695348  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 87/120
	I0731 18:41:06.696892  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 88/120
	I0731 18:41:07.699135  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 89/120
	I0731 18:41:08.701422  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 90/120
	I0731 18:41:09.702933  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 91/120
	I0731 18:41:10.704726  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 92/120
	I0731 18:41:11.706853  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 93/120
	I0731 18:41:12.708299  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 94/120
	I0731 18:41:13.710577  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 95/120
	I0731 18:41:14.712214  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 96/120
	I0731 18:41:15.713717  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 97/120
	I0731 18:41:16.715144  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 98/120
	I0731 18:41:17.716906  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 99/120
	I0731 18:41:18.719356  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 100/120
	I0731 18:41:19.721168  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 101/120
	I0731 18:41:20.722693  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 102/120
	I0731 18:41:21.724113  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 103/120
	I0731 18:41:22.725611  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 104/120
	I0731 18:41:23.727184  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 105/120
	I0731 18:41:24.728880  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 106/120
	I0731 18:41:25.730176  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 107/120
	I0731 18:41:26.731732  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 108/120
	I0731 18:41:27.733537  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 109/120
	I0731 18:41:28.735615  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 110/120
	I0731 18:41:29.737689  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 111/120
	I0731 18:41:30.738887  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 112/120
	I0731 18:41:31.740636  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 113/120
	I0731 18:41:32.742063  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 114/120
	I0731 18:41:33.743946  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 115/120
	I0731 18:41:34.745547  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 116/120
	I0731 18:41:35.747123  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 117/120
	I0731 18:41:36.749002  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 118/120
	I0731 18:41:37.750554  418010 main.go:141] libmachine: (ha-326651-m02) Waiting for machine to stop 119/120
	I0731 18:41:38.751875  418010 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:41:38.752059  418010 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-326651 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (19.135203248s)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:41:38.798724  418424 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:41:38.798845  418424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:41:38.798854  418424 out.go:304] Setting ErrFile to fd 2...
	I0731 18:41:38.798858  418424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:41:38.799033  418424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:41:38.799204  418424 out.go:298] Setting JSON to false
	I0731 18:41:38.799231  418424 mustload.go:65] Loading cluster: ha-326651
	I0731 18:41:38.799263  418424 notify.go:220] Checking for updates...
	I0731 18:41:38.799643  418424 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:41:38.799663  418424 status.go:255] checking status of ha-326651 ...
	I0731 18:41:38.800031  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:38.800096  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:38.815400  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35059
	I0731 18:41:38.815880  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:38.816576  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:38.816598  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:38.816941  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:38.817151  418424 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:41:38.819116  418424 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:41:38.819159  418424 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:41:38.819478  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:38.819522  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:38.834538  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0731 18:41:38.835190  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:38.835825  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:38.835839  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:38.836106  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:38.836312  418424 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:41:38.839350  418424 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:41:38.839757  418424 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:41:38.839791  418424 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:41:38.839945  418424 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:41:38.840277  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:38.840327  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:38.855500  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45551
	I0731 18:41:38.855951  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:38.856523  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:38.856555  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:38.856857  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:38.857024  418424 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:41:38.857174  418424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:41:38.857212  418424 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:41:38.859850  418424 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:41:38.860269  418424 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:41:38.860297  418424 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:41:38.860481  418424 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:41:38.860650  418424 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:41:38.860883  418424 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:41:38.861157  418424 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:41:38.945767  418424 ssh_runner.go:195] Run: systemctl --version
	I0731 18:41:38.953737  418424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:41:38.972535  418424 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:41:38.972566  418424 api_server.go:166] Checking apiserver status ...
	I0731 18:41:38.972611  418424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:41:38.995041  418424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:41:39.005826  418424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:41:39.005898  418424 ssh_runner.go:195] Run: ls
	I0731 18:41:39.011333  418424 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:41:39.017948  418424 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:41:39.017987  418424 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:41:39.017998  418424 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:41:39.018018  418424 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:41:39.018356  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:39.018398  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:39.034511  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
	I0731 18:41:39.034965  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:39.035488  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:39.035512  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:39.035914  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:39.036138  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:41:39.038094  418424 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:41:39.038116  418424 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:41:39.038464  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:39.038524  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:39.054619  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34789
	I0731 18:41:39.055134  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:39.055713  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:39.055748  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:39.056184  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:39.056421  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:41:39.059220  418424 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:41:39.059683  418424 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:41:39.059708  418424 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:41:39.059831  418424 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:41:39.060165  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:39.060222  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:39.076786  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42497
	I0731 18:41:39.077319  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:39.077878  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:39.077900  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:39.078245  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:39.078456  418424 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:41:39.078718  418424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:41:39.078743  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:41:39.081581  418424 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:41:39.082039  418424 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:41:39.082103  418424 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:41:39.082220  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:41:39.082398  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:41:39.082565  418424 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:41:39.082735  418424 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:41:57.508730  418424 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:41:57.508892  418424 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:41:57.508919  418424 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:41:57.508931  418424 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:41:57.508956  418424 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:41:57.508967  418424 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:41:57.509435  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.509502  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.525519  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0731 18:41:57.526032  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.526606  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.526633  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.527013  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.527221  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:41:57.529016  418424 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:41:57.529038  418424 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:41:57.529475  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.529531  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.545161  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46565
	I0731 18:41:57.545630  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.546117  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.546139  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.546442  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.546626  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:41:57.549539  418424 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:41:57.549959  418424 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:41:57.549984  418424 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:41:57.550123  418424 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:41:57.550458  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.550496  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.565661  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35573
	I0731 18:41:57.566138  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.566585  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.566606  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.566969  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.567157  418424 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:41:57.567351  418424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:41:57.567374  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:41:57.569874  418424 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:41:57.570294  418424 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:41:57.570331  418424 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:41:57.570424  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:41:57.570631  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:41:57.570771  418424 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:41:57.570936  418424 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:41:57.658291  418424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:41:57.677088  418424 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:41:57.677127  418424 api_server.go:166] Checking apiserver status ...
	I0731 18:41:57.677177  418424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:41:57.694305  418424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:41:57.704528  418424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:41:57.704582  418424 ssh_runner.go:195] Run: ls
	I0731 18:41:57.709621  418424 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:41:57.714478  418424 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:41:57.714502  418424 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:41:57.714511  418424 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:41:57.714527  418424 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:41:57.714896  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.714946  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.730384  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
	I0731 18:41:57.730874  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.731398  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.731416  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.731745  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.731953  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:41:57.733718  418424 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:41:57.733740  418424 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:41:57.734043  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.734088  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.749379  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36145
	I0731 18:41:57.749842  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.750332  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.750355  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.750748  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.750992  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:41:57.754370  418424 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:41:57.754901  418424 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:41:57.754926  418424 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:41:57.755068  418424 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:41:57.755453  418424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:41:57.755502  418424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:41:57.771304  418424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0731 18:41:57.771706  418424 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:41:57.772208  418424 main.go:141] libmachine: Using API Version  1
	I0731 18:41:57.772251  418424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:41:57.772620  418424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:41:57.772849  418424 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:41:57.773104  418424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:41:57.773129  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:41:57.776212  418424 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:41:57.776679  418424 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:41:57.776709  418424 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:41:57.776897  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:41:57.777083  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:41:57.777252  418424 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:41:57.777389  418424 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:41:57.869444  418424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:41:57.886987  418424 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326651 -n ha-326651
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-326651 logs -n 25: (1.515190014s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m03_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m04 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp testdata/cp-test.txt                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m04_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03:/home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m03 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-326651 node stop m02 -v=7                                                     | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:34:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:34:40.723848  413977 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:34:40.724353  413977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:40.724384  413977 out.go:304] Setting ErrFile to fd 2...
	I0731 18:34:40.724393  413977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:40.724879  413977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:34:40.725848  413977 out.go:298] Setting JSON to false
	I0731 18:34:40.726740  413977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8224,"bootTime":1722442657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:34:40.726803  413977 start.go:139] virtualization: kvm guest
	I0731 18:34:40.728848  413977 out.go:177] * [ha-326651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:34:40.730458  413977 notify.go:220] Checking for updates...
	I0731 18:34:40.730468  413977 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:34:40.731857  413977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:34:40.733021  413977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:34:40.734226  413977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:40.735716  413977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:34:40.737064  413977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:34:40.738470  413977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:34:40.774904  413977 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:34:40.776272  413977 start.go:297] selected driver: kvm2
	I0731 18:34:40.776288  413977 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:34:40.776300  413977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:34:40.777003  413977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:34:40.777074  413977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:34:40.792816  413977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:34:40.792877  413977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:34:40.793118  413977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:34:40.793184  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:34:40.793195  413977 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 18:34:40.793201  413977 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 18:34:40.793264  413977 start.go:340] cluster config:
	{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:34:40.793364  413977 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:34:40.795141  413977 out.go:177] * Starting "ha-326651" primary control-plane node in "ha-326651" cluster
	I0731 18:34:40.796525  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:34:40.796567  413977 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:34:40.796577  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:34:40.796664  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:34:40.796674  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:34:40.796975  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:34:40.796993  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json: {Name:mk70ea6858e5325492e374713de5d9e959a0e0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:34:40.797122  413977 start.go:360] acquireMachinesLock for ha-326651: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:34:40.797149  413977 start.go:364] duration metric: took 15.324µs to acquireMachinesLock for "ha-326651"
	I0731 18:34:40.797166  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:34:40.797218  413977 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:34:40.798819  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:34:40.798978  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:40.799025  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:40.813845  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I0731 18:34:40.814345  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:40.814896  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:34:40.814919  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:40.815368  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:40.815558  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:34:40.815739  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:34:40.815886  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:34:40.815909  413977 client.go:168] LocalClient.Create starting
	I0731 18:34:40.815942  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:34:40.815978  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:34:40.815994  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:34:40.816067  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:34:40.816086  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:34:40.816099  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:34:40.816115  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:34:40.816133  413977 main.go:141] libmachine: (ha-326651) Calling .PreCreateCheck
	I0731 18:34:40.816538  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:34:40.816974  413977 main.go:141] libmachine: Creating machine...
	I0731 18:34:40.816991  413977 main.go:141] libmachine: (ha-326651) Calling .Create
	I0731 18:34:40.817124  413977 main.go:141] libmachine: (ha-326651) Creating KVM machine...
	I0731 18:34:40.818362  413977 main.go:141] libmachine: (ha-326651) DBG | found existing default KVM network
	I0731 18:34:40.819107  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:40.818971  414000 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0731 18:34:40.819130  413977 main.go:141] libmachine: (ha-326651) DBG | created network xml: 
	I0731 18:34:40.819163  413977 main.go:141] libmachine: (ha-326651) DBG | <network>
	I0731 18:34:40.819200  413977 main.go:141] libmachine: (ha-326651) DBG |   <name>mk-ha-326651</name>
	I0731 18:34:40.819214  413977 main.go:141] libmachine: (ha-326651) DBG |   <dns enable='no'/>
	I0731 18:34:40.819224  413977 main.go:141] libmachine: (ha-326651) DBG |   
	I0731 18:34:40.819237  413977 main.go:141] libmachine: (ha-326651) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:34:40.819247  413977 main.go:141] libmachine: (ha-326651) DBG |     <dhcp>
	I0731 18:34:40.819257  413977 main.go:141] libmachine: (ha-326651) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:34:40.819270  413977 main.go:141] libmachine: (ha-326651) DBG |     </dhcp>
	I0731 18:34:40.819283  413977 main.go:141] libmachine: (ha-326651) DBG |   </ip>
	I0731 18:34:40.819293  413977 main.go:141] libmachine: (ha-326651) DBG |   
	I0731 18:34:40.819303  413977 main.go:141] libmachine: (ha-326651) DBG | </network>
	I0731 18:34:40.819312  413977 main.go:141] libmachine: (ha-326651) DBG | 
	I0731 18:34:40.824475  413977 main.go:141] libmachine: (ha-326651) DBG | trying to create private KVM network mk-ha-326651 192.168.39.0/24...
	I0731 18:34:40.890090  413977 main.go:141] libmachine: (ha-326651) DBG | private KVM network mk-ha-326651 192.168.39.0/24 created
	I0731 18:34:40.890164  413977 main.go:141] libmachine: (ha-326651) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 ...
	I0731 18:34:40.890192  413977 main.go:141] libmachine: (ha-326651) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:34:40.890228  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:40.890040  414000 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:40.890263  413977 main.go:141] libmachine: (ha-326651) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:34:41.157266  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.157110  414000 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa...
	I0731 18:34:41.217550  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.217377  414000 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/ha-326651.rawdisk...
	I0731 18:34:41.217595  413977 main.go:141] libmachine: (ha-326651) DBG | Writing magic tar header
	I0731 18:34:41.217609  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 (perms=drwx------)
	I0731 18:34:41.217624  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:34:41.217637  413977 main.go:141] libmachine: (ha-326651) DBG | Writing SSH key tar header
	I0731 18:34:41.217644  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.217490  414000 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 ...
	I0731 18:34:41.217662  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651
	I0731 18:34:41.217680  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:34:41.217699  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:41.217710  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:34:41.217720  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:34:41.217726  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:34:41.217736  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:34:41.217740  413977 main.go:141] libmachine: (ha-326651) Creating domain...
	I0731 18:34:41.217755  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:34:41.217764  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:34:41.217770  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:34:41.217777  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home
	I0731 18:34:41.217786  413977 main.go:141] libmachine: (ha-326651) DBG | Skipping /home - not owner
	I0731 18:34:41.218814  413977 main.go:141] libmachine: (ha-326651) define libvirt domain using xml: 
	I0731 18:34:41.218847  413977 main.go:141] libmachine: (ha-326651) <domain type='kvm'>
	I0731 18:34:41.218864  413977 main.go:141] libmachine: (ha-326651)   <name>ha-326651</name>
	I0731 18:34:41.218876  413977 main.go:141] libmachine: (ha-326651)   <memory unit='MiB'>2200</memory>
	I0731 18:34:41.218885  413977 main.go:141] libmachine: (ha-326651)   <vcpu>2</vcpu>
	I0731 18:34:41.218910  413977 main.go:141] libmachine: (ha-326651)   <features>
	I0731 18:34:41.218920  413977 main.go:141] libmachine: (ha-326651)     <acpi/>
	I0731 18:34:41.218927  413977 main.go:141] libmachine: (ha-326651)     <apic/>
	I0731 18:34:41.218935  413977 main.go:141] libmachine: (ha-326651)     <pae/>
	I0731 18:34:41.218946  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.218970  413977 main.go:141] libmachine: (ha-326651)   </features>
	I0731 18:34:41.218991  413977 main.go:141] libmachine: (ha-326651)   <cpu mode='host-passthrough'>
	I0731 18:34:41.218999  413977 main.go:141] libmachine: (ha-326651)   
	I0731 18:34:41.219010  413977 main.go:141] libmachine: (ha-326651)   </cpu>
	I0731 18:34:41.219020  413977 main.go:141] libmachine: (ha-326651)   <os>
	I0731 18:34:41.219027  413977 main.go:141] libmachine: (ha-326651)     <type>hvm</type>
	I0731 18:34:41.219036  413977 main.go:141] libmachine: (ha-326651)     <boot dev='cdrom'/>
	I0731 18:34:41.219041  413977 main.go:141] libmachine: (ha-326651)     <boot dev='hd'/>
	I0731 18:34:41.219046  413977 main.go:141] libmachine: (ha-326651)     <bootmenu enable='no'/>
	I0731 18:34:41.219050  413977 main.go:141] libmachine: (ha-326651)   </os>
	I0731 18:34:41.219055  413977 main.go:141] libmachine: (ha-326651)   <devices>
	I0731 18:34:41.219063  413977 main.go:141] libmachine: (ha-326651)     <disk type='file' device='cdrom'>
	I0731 18:34:41.219073  413977 main.go:141] libmachine: (ha-326651)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/boot2docker.iso'/>
	I0731 18:34:41.219084  413977 main.go:141] libmachine: (ha-326651)       <target dev='hdc' bus='scsi'/>
	I0731 18:34:41.219101  413977 main.go:141] libmachine: (ha-326651)       <readonly/>
	I0731 18:34:41.219129  413977 main.go:141] libmachine: (ha-326651)     </disk>
	I0731 18:34:41.219141  413977 main.go:141] libmachine: (ha-326651)     <disk type='file' device='disk'>
	I0731 18:34:41.219153  413977 main.go:141] libmachine: (ha-326651)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:34:41.219172  413977 main.go:141] libmachine: (ha-326651)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/ha-326651.rawdisk'/>
	I0731 18:34:41.219178  413977 main.go:141] libmachine: (ha-326651)       <target dev='hda' bus='virtio'/>
	I0731 18:34:41.219196  413977 main.go:141] libmachine: (ha-326651)     </disk>
	I0731 18:34:41.219203  413977 main.go:141] libmachine: (ha-326651)     <interface type='network'>
	I0731 18:34:41.219213  413977 main.go:141] libmachine: (ha-326651)       <source network='mk-ha-326651'/>
	I0731 18:34:41.219220  413977 main.go:141] libmachine: (ha-326651)       <model type='virtio'/>
	I0731 18:34:41.219225  413977 main.go:141] libmachine: (ha-326651)     </interface>
	I0731 18:34:41.219230  413977 main.go:141] libmachine: (ha-326651)     <interface type='network'>
	I0731 18:34:41.219248  413977 main.go:141] libmachine: (ha-326651)       <source network='default'/>
	I0731 18:34:41.219268  413977 main.go:141] libmachine: (ha-326651)       <model type='virtio'/>
	I0731 18:34:41.219280  413977 main.go:141] libmachine: (ha-326651)     </interface>
	I0731 18:34:41.219290  413977 main.go:141] libmachine: (ha-326651)     <serial type='pty'>
	I0731 18:34:41.219301  413977 main.go:141] libmachine: (ha-326651)       <target port='0'/>
	I0731 18:34:41.219311  413977 main.go:141] libmachine: (ha-326651)     </serial>
	I0731 18:34:41.219326  413977 main.go:141] libmachine: (ha-326651)     <console type='pty'>
	I0731 18:34:41.219341  413977 main.go:141] libmachine: (ha-326651)       <target type='serial' port='0'/>
	I0731 18:34:41.219350  413977 main.go:141] libmachine: (ha-326651)     </console>
	I0731 18:34:41.219357  413977 main.go:141] libmachine: (ha-326651)     <rng model='virtio'>
	I0731 18:34:41.219367  413977 main.go:141] libmachine: (ha-326651)       <backend model='random'>/dev/random</backend>
	I0731 18:34:41.219374  413977 main.go:141] libmachine: (ha-326651)     </rng>
	I0731 18:34:41.219382  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.219388  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.219396  413977 main.go:141] libmachine: (ha-326651)   </devices>
	I0731 18:34:41.219406  413977 main.go:141] libmachine: (ha-326651) </domain>
	I0731 18:34:41.219420  413977 main.go:141] libmachine: (ha-326651) 
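[Annotation] The <domain type='kvm'> document above (2200 MiB, 2 vCPUs, boot ISO plus raw disk, one NIC on mk-ha-326651 and one on the default network) is what was just handed to libvirt to define the domain; the "Creating domain..." step a few lines below then boots it. Continuing the hypothetical libvirt.org/go/libvirt sketch from the network step, with domainXML assumed to hold that document:

    // defineAndStart persistently defines the domain from its XML and boots it.
    // conn is the *libvirt.Connect from the earlier sketch; domainXML holds the
    // <domain> document printed in the log above.
    func defineAndStart(conn *libvirt.Connect, domainXML string) error {
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            return fmt.Errorf("start domain: %w", err)
        }
        return nil
    }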
	I0731 18:34:41.223555  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:ee:73:0f in network default
	I0731 18:34:41.224056  413977 main.go:141] libmachine: (ha-326651) Ensuring networks are active...
	I0731 18:34:41.224078  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:41.224700  413977 main.go:141] libmachine: (ha-326651) Ensuring network default is active
	I0731 18:34:41.224971  413977 main.go:141] libmachine: (ha-326651) Ensuring network mk-ha-326651 is active
	I0731 18:34:41.225395  413977 main.go:141] libmachine: (ha-326651) Getting domain xml...
	I0731 18:34:41.226030  413977 main.go:141] libmachine: (ha-326651) Creating domain...
	I0731 18:34:42.424439  413977 main.go:141] libmachine: (ha-326651) Waiting to get IP...
	I0731 18:34:42.425190  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:42.425634  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:42.425656  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:42.425576  414000 retry.go:31] will retry after 203.424539ms: waiting for machine to come up
	I0731 18:34:42.631245  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:42.631765  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:42.631794  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:42.631718  414000 retry.go:31] will retry after 387.742735ms: waiting for machine to come up
	I0731 18:34:43.021313  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.021797  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.021827  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.021745  414000 retry.go:31] will retry after 469.359884ms: waiting for machine to come up
	I0731 18:34:43.492410  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.493086  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.493110  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.493000  414000 retry.go:31] will retry after 395.781269ms: waiting for machine to come up
	I0731 18:34:43.890674  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.891079  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.891100  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.891035  414000 retry.go:31] will retry after 734.285922ms: waiting for machine to come up
	I0731 18:34:44.626848  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:44.627387  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:44.627420  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:44.627317  414000 retry.go:31] will retry after 862.205057ms: waiting for machine to come up
	I0731 18:34:45.491435  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:45.491917  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:45.491947  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:45.491846  414000 retry.go:31] will retry after 1.106594488s: waiting for machine to come up
	I0731 18:34:46.599797  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:46.600340  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:46.600396  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:46.600270  414000 retry.go:31] will retry after 1.454701519s: waiting for machine to come up
	I0731 18:34:48.057051  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:48.057432  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:48.057458  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:48.057376  414000 retry.go:31] will retry after 1.796635335s: waiting for machine to come up
	I0731 18:34:49.856244  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:49.856665  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:49.856691  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:49.856622  414000 retry.go:31] will retry after 1.762364281s: waiting for machine to come up
	I0731 18:34:51.620624  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:51.621132  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:51.621169  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:51.621059  414000 retry.go:31] will retry after 2.662012393s: waiting for machine to come up
	I0731 18:34:54.286074  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:54.286542  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:54.286567  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:54.286494  414000 retry.go:31] will retry after 3.629071767s: waiting for machine to come up
	I0731 18:34:57.917456  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:57.917985  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:57.918010  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:57.917960  414000 retry.go:31] will retry after 3.371083275s: waiting for machine to come up
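[Annotation] The repeated "unable to find current IP address ... will retry after ..." lines above are the driver polling the guest's DHCP lease with a growing, jittered delay until the VM reports an address (it finally does just below). A self-contained sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query and the delays are illustrative, not the driver's actual schedule:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder: the real driver inspects the domain's DHCP
    // leases on network mk-ha-326651 to find the guest's address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls lookupIP with a jittered, doubling backoff until it
    // succeeds or the deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            backoff *= 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if ip, err := waitForIP(5 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("Found IP for machine:", ip)
        }
    }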
	I0731 18:35:01.290529  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.291019  413977 main.go:141] libmachine: (ha-326651) Found IP for machine: 192.168.39.220
	I0731 18:35:01.291048  413977 main.go:141] libmachine: (ha-326651) Reserving static IP address...
	I0731 18:35:01.291061  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has current primary IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.291356  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find host DHCP lease matching {name: "ha-326651", mac: "52:54:00:eb:7a:d3", ip: "192.168.39.220"} in network mk-ha-326651
	I0731 18:35:01.367167  413977 main.go:141] libmachine: (ha-326651) DBG | Getting to WaitForSSH function...
	I0731 18:35:01.367205  413977 main.go:141] libmachine: (ha-326651) Reserved static IP address: 192.168.39.220
	I0731 18:35:01.367234  413977 main.go:141] libmachine: (ha-326651) Waiting for SSH to be available...
	I0731 18:35:01.370021  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.370436  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.370469  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.370729  413977 main.go:141] libmachine: (ha-326651) DBG | Using SSH client type: external
	I0731 18:35:01.370754  413977 main.go:141] libmachine: (ha-326651) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa (-rw-------)
	I0731 18:35:01.370871  413977 main.go:141] libmachine: (ha-326651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:35:01.370902  413977 main.go:141] libmachine: (ha-326651) DBG | About to run SSH command:
	I0731 18:35:01.370916  413977 main.go:141] libmachine: (ha-326651) DBG | exit 0
	I0731 18:35:01.500352  413977 main.go:141] libmachine: (ha-326651) DBG | SSH cmd err, output: <nil>: 
	I0731 18:35:01.500653  413977 main.go:141] libmachine: (ha-326651) KVM machine creation complete!
	I0731 18:35:01.501052  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:35:01.501680  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:01.501926  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:01.502099  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:35:01.502116  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:01.503604  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:35:01.503622  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:35:01.503629  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:35:01.503638  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.506124  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.506582  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.506611  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.506716  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.506897  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.507096  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.507234  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.507398  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.507653  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.507665  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:35:01.611686  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:01.611713  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:35:01.611721  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.614365  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.614780  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.614812  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.615001  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.615218  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.615364  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.615498  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.615680  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.615869  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.615882  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:35:01.725461  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:35:01.725589  413977 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:35:01.725602  413977 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:35:01.725610  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.725916  413977 buildroot.go:166] provisioning hostname "ha-326651"
	I0731 18:35:01.725942  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.726128  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.729355  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.729674  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.729702  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.729898  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.730090  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.730270  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.730414  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.730596  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.730786  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.730802  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651 && echo "ha-326651" | sudo tee /etc/hostname
	I0731 18:35:01.851718  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:35:01.851743  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.855156  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.855427  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.855488  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.855698  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.856028  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.856221  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.856452  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.856652  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.856824  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.856840  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:35:01.970596  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
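[Annotation] The two provisioning commands above (setting the hostname, then patching /etc/hosts) are executed over SSH with the machine's generated key. A minimal sketch of that step using golang.org/x/crypto/ssh, reusing the key path, user and address reported in the log; everything else is an assumption, not minikube's actual implementation:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address as reported in the log above.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.220:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        // Same idea as the first provisioning command in the log.
        if err := sess.Run(`sudo hostname ha-326651 && echo "ha-326651" | sudo tee /etc/hostname`); err != nil {
            log.Fatal(err)
        }
    }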
	I0731 18:35:01.970634  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:35:01.970687  413977 buildroot.go:174] setting up certificates
	I0731 18:35:01.970698  413977 provision.go:84] configureAuth start
	I0731 18:35:01.970710  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.971058  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:01.974089  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.974436  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.974466  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.974709  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.976967  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.977265  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.977285  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.977458  413977 provision.go:143] copyHostCerts
	I0731 18:35:01.977493  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:01.977532  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:35:01.977541  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:01.977609  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:35:01.977684  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:01.977709  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:35:01.977716  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:01.977740  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:35:01.977780  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:01.977805  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:35:01.977811  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:01.977837  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:35:01.977887  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651 san=[127.0.0.1 192.168.39.220 ha-326651 localhost minikube]
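[Annotation] The line above records the server certificate being generated with SANs for the loopback address, the machine IP, and the host names ha-326651 / localhost / minikube. Purely as an illustration of how those san=[...] entries map onto Go's crypto/x509 fields (self-signed here for brevity, whereas minikube signs with its CA key; this is not minikube's code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-326651"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs: IP addresses and DNS names, as listed in the provision log above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
            DNSNames:    []string{"ha-326651", "localhost", "minikube"},
        }
        // Self-signed for the sketch; the real flow uses the CA cert/key as parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }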
	I0731 18:35:02.430845  413977 provision.go:177] copyRemoteCerts
	I0731 18:35:02.430916  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:35:02.430944  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.434564  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.434904  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.434935  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.435091  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.435332  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.435498  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.435619  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:02.520471  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:35:02.520541  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:35:02.546707  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:35:02.546778  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 18:35:02.572855  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:35:02.572944  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:35:02.596466  413977 provision.go:87] duration metric: took 625.753635ms to configureAuth
	I0731 18:35:02.596499  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:35:02.596755  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:02.596883  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.599585  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.599944  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.599974  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.600170  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.600371  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.600656  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.600812  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.601011  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:02.601178  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:02.601195  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:35:02.881432  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:35:02.881464  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:35:02.881472  413977 main.go:141] libmachine: (ha-326651) Calling .GetURL
	I0731 18:35:02.882773  413977 main.go:141] libmachine: (ha-326651) DBG | Using libvirt version 6000000
	I0731 18:35:02.885020  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.885340  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.885370  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.885550  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:35:02.885564  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:35:02.885571  413977 client.go:171] duration metric: took 22.069652293s to LocalClient.Create
	I0731 18:35:02.885591  413977 start.go:167] duration metric: took 22.069706495s to libmachine.API.Create "ha-326651"
	I0731 18:35:02.885601  413977 start.go:293] postStartSetup for "ha-326651" (driver="kvm2")
	I0731 18:35:02.885610  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:35:02.885630  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:02.885895  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:35:02.885927  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.887911  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.888288  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.888312  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.888522  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.888758  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.888942  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.889173  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:02.971215  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:35:02.975448  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:35:02.975480  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:35:02.975561  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:35:02.975633  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:35:02.975644  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:35:02.975738  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:35:02.985721  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:35:03.009493  413977 start.go:296] duration metric: took 123.872449ms for postStartSetup
	I0731 18:35:03.009567  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:35:03.010351  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:03.012910  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.013238  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.013270  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.013497  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:03.013674  413977 start.go:128] duration metric: took 22.216446388s to createHost
	I0731 18:35:03.013697  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.016116  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.016468  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.016500  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.016604  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.016796  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.016961  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.017101  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.017279  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:03.017448  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:03.017459  413977 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:35:03.125237  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722450903.098801256
	
	I0731 18:35:03.125270  413977 fix.go:216] guest clock: 1722450903.098801256
	I0731 18:35:03.125281  413977 fix.go:229] Guest: 2024-07-31 18:35:03.098801256 +0000 UTC Remote: 2024-07-31 18:35:03.013686749 +0000 UTC m=+22.326991001 (delta=85.114507ms)
	I0731 18:35:03.125331  413977 fix.go:200] guest clock delta is within tolerance: 85.114507ms
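[Annotation] A note on the `date +%!s(MISSING).%!N(MISSING)` command above and the earlier `printf %!s(MISSING)` one: the %!...(MISSING) fragments are almost certainly not part of the commands that actually ran. They are Go's fmt package flagging format verbs (%s, %N) that received no argument when the log line was rendered, so the underlying commands contained literal %s / %N. A one-line demonstration:

    package main

    import "fmt"

    func main() {
        // %s and %N have no matching arguments, so fmt substitutes
        // "%!s(MISSING)" and "%!N(MISSING)" in the output, exactly as seen
        // in the log (go vet would warn about this; shown only to explain it).
        fmt.Printf("date +%s.%N\n")
    }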
	I0731 18:35:03.125337  413977 start.go:83] releasing machines lock for "ha-326651", held for 22.328179384s
	I0731 18:35:03.125363  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.125651  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:03.128266  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.128568  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.128600  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.128767  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129351  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129506  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129638  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:35:03.129693  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.129731  413977 ssh_runner.go:195] Run: cat /version.json
	I0731 18:35:03.129751  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.132297  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132549  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132650  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.132676  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132788  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.132898  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.132931  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132973  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.133150  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.133150  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.133352  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.133338  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:03.133507  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.133652  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:03.209618  413977 ssh_runner.go:195] Run: systemctl --version
	I0731 18:35:03.235915  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:35:03.400504  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:35:03.406467  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:35:03.406541  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:35:03.424193  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:35:03.424225  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:35:03.424297  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:35:03.440499  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:35:03.455446  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:35:03.455510  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:35:03.470288  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:35:03.485030  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:35:03.608058  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:35:03.748606  413977 docker.go:233] disabling docker service ...
	I0731 18:35:03.748688  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:35:03.763497  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:35:03.776956  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:35:03.912903  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:35:04.051609  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:35:04.065775  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:35:04.084878  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:35:04.084944  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.095985  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:35:04.096053  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.107308  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.118206  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.129146  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:35:04.140002  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.151131  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.169345  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.180308  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:35:04.189948  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:35:04.190016  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:35:04.203308  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:35:04.213074  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:35:04.339820  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:35:04.473005  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:35:04.473089  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:35:04.478202  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:35:04.478277  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:35:04.482124  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:35:04.521550  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:35:04.521644  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:35:04.550817  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:35:04.582275  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:35:04.583668  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:04.586549  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:04.586860  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:04.586886  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:04.587161  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:35:04.591521  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:35:04.605116  413977 kubeadm.go:883] updating cluster {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:35:04.605253  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:35:04.605299  413977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:35:04.635944  413977 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:35:04.636021  413977 ssh_runner.go:195] Run: which lz4
	I0731 18:35:04.639922  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 18:35:04.640026  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 18:35:04.644215  413977 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:35:04.644249  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:35:06.093375  413977 crio.go:462] duration metric: took 1.45338213s to copy over tarball
	I0731 18:35:06.093466  413977 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:35:08.282574  413977 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189071604s)
	I0731 18:35:08.282615  413977 crio.go:469] duration metric: took 2.189201764s to extract the tarball
	I0731 18:35:08.282625  413977 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 18:35:08.320900  413977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:35:08.369264  413977 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:35:08.369292  413977 cache_images.go:84] Images are preloaded, skipping loading
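The preload flow above first checks the cri-o image store, then copies the roughly 400 MB preloaded-images tarball to /preloaded.tar.lz4, unpacks it into /var, and re-checks. A manual equivalent of the extract-and-verify steps (same paths as in the log, run on the guest):
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json   # should now list the registry.k8s.io/kube-* images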
	I0731 18:35:08.369300  413977 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0731 18:35:08.369418  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:35:08.369484  413977 ssh_runner.go:195] Run: crio config
	I0731 18:35:08.412904  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:35:08.412927  413977 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 18:35:08.412936  413977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:35:08.412958  413977 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326651 NodeName:ha-326651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:35:08.413112  413977 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-326651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
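This rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs (both steps appear further down in this log). A sketch for inspecting the file actually used on the node, assuming the guest is still running:
	minikube -p ha-326651 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"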
	
	I0731 18:35:08.413142  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:35:08.413185  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:35:08.429915  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:35:08.430038  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
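kube-vip runs as a static pod (its manifest is copied into /etc/kubernetes/manifests below) and advertises the HA VIP 192.168.39.254 on eth0 from whichever control-plane node holds the plndr-cp-lock lease. A sketch for checking which node currently carries the VIP, assuming the interface and lease names from the config above:
	minikube -p ha-326651 ssh "ip -4 addr show dev eth0"
	kubectl --context ha-326651 -n kube-system get lease plndr-cp-lock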
	I0731 18:35:08.430115  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:35:08.440735  413977 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:35:08.440834  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 18:35:08.453582  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 18:35:08.471025  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:35:08.487472  413977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 18:35:08.503698  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 18:35:08.520358  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:35:08.524197  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:35:08.535926  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:35:08.662409  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
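The kubelet drop-in written above (10-kubeadm.conf, 309 bytes) overrides ExecStart with the flags shown earlier; daemon-reload plus start picks it up. A sketch for viewing the effective unit together with its drop-ins:
	minikube -p ha-326651 ssh "sudo systemctl cat kubelet"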
	I0731 18:35:08.680546  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.220
	I0731 18:35:08.680577  413977 certs.go:194] generating shared ca certs ...
	I0731 18:35:08.680599  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.680776  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:35:08.680838  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:35:08.680852  413977 certs.go:256] generating profile certs ...
	I0731 18:35:08.680923  413977 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:35:08.680943  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt with IP's: []
	I0731 18:35:08.813901  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt ...
	I0731 18:35:08.813932  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt: {Name:mkbf29d30b87ac9344f189deb736c1c30a7f569f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.814140  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key ...
	I0731 18:35:08.814156  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key: {Name:mk1aeab75fd0a97151206c81270c992b7289ce8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.814259  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03
	I0731 18:35:08.814281  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I0731 18:35:08.871436  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 ...
	I0731 18:35:08.871467  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03: {Name:mk0deec4f68a942a46259c6f72337b1840b5b859 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.871656  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03 ...
	I0731 18:35:08.871680  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03: {Name:mk239c1e471661396ec00ed8f27be84a4272e488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.871776  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:35:08.871872  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:35:08.871965  413977 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:35:08.871986  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt with IP's: []
	I0731 18:35:09.107578  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt ...
	I0731 18:35:09.107620  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt: {Name:mkfc63cb0330ae66e4cefacb0c34de64236dfcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:09.107856  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key ...
	I0731 18:35:09.107880  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key: {Name:mk87e0227c814176c96ddf4f3b22cd65cbfe3820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
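The apiserver profile certificate generated above is signed for the service IP, localhost, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.220, 192.168.39.254). A sketch for confirming the SANs from the host, using the file path shown in the log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'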
	I0731 18:35:09.107994  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:35:09.108023  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:35:09.108048  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:35:09.108071  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:35:09.108090  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:35:09.108111  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:35:09.108135  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:35:09.108157  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:35:09.108246  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:35:09.108301  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:35:09.108326  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:35:09.108371  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:35:09.108436  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:35:09.108470  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:35:09.108550  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:35:09.108605  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.108630  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.108648  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.109342  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:35:09.135205  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:35:09.160752  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:35:09.186825  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:35:09.210864  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:35:09.235888  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:35:09.261260  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:35:09.285716  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:35:09.309506  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:35:09.333047  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:35:09.358243  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:35:09.383029  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:35:09.404779  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:35:09.412050  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:35:09.422976  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.427487  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.427556  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.433425  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:35:09.447663  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:35:09.466955  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.473257  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.473326  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.479726  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:35:09.494415  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:35:09.511247  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.516731  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.516795  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.522679  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
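Each CA file above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two user certs), which is what the x509 -hash calls compute. A sketch of the same hash-and-link step for one file, run on the guest:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"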
	I0731 18:35:09.534570  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:35:09.538933  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:35:09.539000  413977 kubeadm.go:392] StartCluster: {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:35:09.539113  413977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:35:09.539186  413977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:35:09.580544  413977 cri.go:89] found id: ""
	I0731 18:35:09.580617  413977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:35:09.591282  413977 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:35:09.601767  413977 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:35:09.612363  413977 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:35:09.612408  413977 kubeadm.go:157] found existing configuration files:
	
	I0731 18:35:09.612476  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:35:09.622972  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:35:09.623028  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:35:09.633010  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:35:09.644050  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:35:09.644194  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:35:09.655060  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:35:09.665410  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:35:09.665479  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:35:09.675773  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:35:09.685542  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:35:09.685616  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 18:35:09.696050  413977 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:35:09.807439  413977 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:35:09.807520  413977 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:35:09.947531  413977 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:35:09.947663  413977 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:35:09.947822  413977 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:35:10.156890  413977 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:35:10.376045  413977 out.go:204]   - Generating certificates and keys ...
	I0731 18:35:10.376178  413977 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:35:10.376260  413977 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:35:10.376366  413977 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 18:35:10.667880  413977 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 18:35:10.788539  413977 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 18:35:10.999419  413977 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 18:35:11.412365  413977 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 18:35:11.412592  413977 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326651 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0731 18:35:11.691430  413977 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 18:35:11.691686  413977 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326651 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0731 18:35:11.748202  413977 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 18:35:11.849775  413977 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 18:35:12.073145  413977 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 18:35:12.073280  413977 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:35:12.218887  413977 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:35:12.334397  413977 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:35:12.435537  413977 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:35:12.601773  413977 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:35:12.765403  413977 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:35:12.766022  413977 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:35:12.768970  413977 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:35:12.771208  413977 out.go:204]   - Booting up control plane ...
	I0731 18:35:12.771324  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:35:12.771445  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:35:12.771526  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:35:12.788777  413977 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:35:12.788896  413977 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:35:12.788933  413977 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:35:12.947583  413977 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:35:12.947718  413977 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:35:13.449322  413977 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.280738ms
	I0731 18:35:13.449442  413977 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:35:19.542500  413977 kubeadm.go:310] [api-check] The API server is healthy after 6.096981603s
	I0731 18:35:19.556522  413977 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:35:19.572914  413977 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:35:19.597409  413977 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:35:19.597654  413977 kubeadm.go:310] [mark-control-plane] Marking the node ha-326651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:35:19.609399  413977 kubeadm.go:310] [bootstrap-token] Using token: mjwpqc.cas5affjevm676c6
	I0731 18:35:19.610932  413977 out.go:204]   - Configuring RBAC rules ...
	I0731 18:35:19.611041  413977 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:35:19.616009  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:35:19.623171  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:35:19.626030  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 18:35:19.632949  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:35:19.639026  413977 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:35:19.952644  413977 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:35:20.389795  413977 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:35:20.950692  413977 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:35:20.952871  413977 kubeadm.go:310] 
	I0731 18:35:20.952930  413977 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:35:20.952936  413977 kubeadm.go:310] 
	I0731 18:35:20.953046  413977 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:35:20.953066  413977 kubeadm.go:310] 
	I0731 18:35:20.953115  413977 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:35:20.953189  413977 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:35:20.953268  413977 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:35:20.953276  413977 kubeadm.go:310] 
	I0731 18:35:20.953319  413977 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:35:20.953337  413977 kubeadm.go:310] 
	I0731 18:35:20.953406  413977 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:35:20.953418  413977 kubeadm.go:310] 
	I0731 18:35:20.953489  413977 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:35:20.953608  413977 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:35:20.953719  413977 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:35:20.953729  413977 kubeadm.go:310] 
	I0731 18:35:20.953844  413977 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:35:20.953966  413977 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:35:20.953978  413977 kubeadm.go:310] 
	I0731 18:35:20.954100  413977 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mjwpqc.cas5affjevm676c6 \
	I0731 18:35:20.954199  413977 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd \
	I0731 18:35:20.954232  413977 kubeadm.go:310] 	--control-plane 
	I0731 18:35:20.954247  413977 kubeadm.go:310] 
	I0731 18:35:20.954353  413977 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:35:20.954363  413977 kubeadm.go:310] 
	I0731 18:35:20.954462  413977 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mjwpqc.cas5affjevm676c6 \
	I0731 18:35:20.954585  413977 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd 
	I0731 18:35:20.954917  413977 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
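The join commands printed by kubeadm embed the bootstrap token mjwpqc.cas5affjevm676c6 with the 24h TTL set in the InitConfiguration above. Once it expires, a fresh worker join command can be generated on a control-plane node with standard kubeadm (not minikube-specific):
	sudo kubeadm token create --print-join-command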
	I0731 18:35:20.955067  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:35:20.955084  413977 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 18:35:20.956822  413977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 18:35:20.958071  413977 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 18:35:20.963420  413977 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 18:35:20.963436  413977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 18:35:20.982042  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
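The CNI manifest applied here is minikube's kindnet configuration for the 10.244.0.0/16 pod CIDR chosen above. A sketch for verifying the CNI pods once the node is Ready (assuming the kindnet pod names contain "kindnet"):
	kubectl --context ha-326651 -n kube-system get pods -o wide | grep kindnet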
	I0731 18:35:21.388053  413977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:35:21.388123  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:21.388137  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651 minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=true
	I0731 18:35:21.522149  413977 ops.go:34] apiserver oom_adj: -16
	I0731 18:35:21.522177  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:22.022868  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:22.523244  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:23.022569  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:23.522992  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:24.022962  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:24.522691  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:25.022840  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:25.522450  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:26.022963  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:26.523135  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:27.022447  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:27.522648  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:28.022393  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:28.522782  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:29.022914  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:29.522590  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:30.022599  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:30.522458  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:31.023101  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:31.522322  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:32.022812  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:32.523180  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:33.023246  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:33.522836  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:34.023049  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:34.193870  413977 kubeadm.go:1113] duration metric: took 12.805818515s to wait for elevateKubeSystemPrivileges
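The repeated "kubectl get sa default" calls above are minikube polling roughly every 500 ms until the default ServiceAccount exists in the default namespace (service accounts are created asynchronously once the controller-manager is up). A manual equivalent of that wait (sketch):
	until kubectl --context ha-326651 -n default get sa default >/dev/null 2>&1; do sleep 0.5; done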
	I0731 18:35:34.193915  413977 kubeadm.go:394] duration metric: took 24.654920078s to StartCluster
	I0731 18:35:34.193941  413977 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:34.194037  413977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:35:34.194906  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:34.195173  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 18:35:34.195233  413977 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:35:34.195269  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:35:34.195279  413977 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:35:34.195343  413977 addons.go:69] Setting storage-provisioner=true in profile "ha-326651"
	I0731 18:35:34.195356  413977 addons.go:69] Setting default-storageclass=true in profile "ha-326651"
	I0731 18:35:34.195383  413977 addons.go:234] Setting addon storage-provisioner=true in "ha-326651"
	I0731 18:35:34.195391  413977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326651"
	I0731 18:35:34.195416  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:35:34.195479  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:34.195798  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.195824  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.195886  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.195924  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.211396  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0731 18:35:34.211486  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0731 18:35:34.211858  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.212018  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.212400  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.212423  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.212557  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.212580  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.212749  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.212937  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.213165  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.213310  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.213336  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.215291  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:35:34.215524  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 18:35:34.215965  413977 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 18:35:34.216093  413977 addons.go:234] Setting addon default-storageclass=true in "ha-326651"
	I0731 18:35:34.216145  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:35:34.216416  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.216456  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.229783  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0731 18:35:34.230417  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.230979  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.231004  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.231357  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.231363  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0731 18:35:34.231603  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.231750  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.232266  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.232293  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.232677  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.233298  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.233331  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.233510  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:34.235939  413977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:35:34.237554  413977 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:35:34.237581  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:35:34.237606  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:34.240759  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.241209  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:34.241235  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.241401  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:34.241596  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:34.241765  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:34.241903  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:34.254394  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0731 18:35:34.254910  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.255362  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.255385  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.255779  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.256046  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.257874  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:34.258158  413977 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:35:34.258179  413977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:35:34.258202  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:34.261652  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.262186  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:34.262214  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.262389  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:34.262567  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:34.262734  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:34.262870  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:34.420261  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 18:35:34.503010  413977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:35:34.513515  413977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:35:34.864751  413977 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
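The sed pipeline above edits the coredns ConfigMap in place, adding a hosts{} block that resolves host.minikube.internal to 192.168.39.1 and inserting the log plugin before errors. A sketch for checking the resulting Corefile fragment:
	kubectl --context ha-326651 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'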
	I0731 18:35:35.137522  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137552  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137549  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137624  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137853  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.137858  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.137869  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.137880  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.137884  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137889  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137892  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137897  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.138114  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.138127  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.138140  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.138148  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.138210  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.138246  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.138363  413977 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 18:35:35.138374  413977 round_trippers.go:469] Request Headers:
	I0731 18:35:35.138384  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:35:35.138390  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:35:35.153319  413977 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 18:35:35.153970  413977 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 18:35:35.153984  413977 round_trippers.go:469] Request Headers:
	I0731 18:35:35.153995  413977 round_trippers.go:473]     Content-Type: application/json
	I0731 18:35:35.154005  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:35:35.154012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:35:35.157091  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:35:35.157373  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.157391  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.157720  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.157740  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.157747  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.159573  413977 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 18:35:35.160843  413977 addons.go:510] duration metric: took 965.561242ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 18:35:35.160881  413977 start.go:246] waiting for cluster config update ...
	I0731 18:35:35.160896  413977 start.go:255] writing updated cluster config ...
	I0731 18:35:35.162705  413977 out.go:177] 
	I0731 18:35:35.164248  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:35.164331  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:35.165987  413977 out.go:177] * Starting "ha-326651-m02" control-plane node in "ha-326651" cluster
	I0731 18:35:35.167212  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:35:35.167231  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:35:35.167318  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:35:35.167329  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:35:35.167389  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:35.167534  413977 start.go:360] acquireMachinesLock for ha-326651-m02: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:35:35.167575  413977 start.go:364] duration metric: took 22.182µs to acquireMachinesLock for "ha-326651-m02"
	I0731 18:35:35.167592  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:35:35.167663  413977 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 18:35:35.169251  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:35:35.169333  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:35.169357  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:35.183815  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0731 18:35:35.184191  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:35.184697  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:35.184723  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:35.185029  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:35.185228  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:35.185369  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:35.185497  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:35:35.185520  413977 client.go:168] LocalClient.Create starting
	I0731 18:35:35.185552  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:35:35.185590  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:35:35.185610  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:35:35.185681  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:35:35.185707  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:35:35.185723  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:35:35.185751  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:35:35.185764  413977 main.go:141] libmachine: (ha-326651-m02) Calling .PreCreateCheck
	I0731 18:35:35.185933  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:35.186320  413977 main.go:141] libmachine: Creating machine...
	I0731 18:35:35.186336  413977 main.go:141] libmachine: (ha-326651-m02) Calling .Create
	I0731 18:35:35.186440  413977 main.go:141] libmachine: (ha-326651-m02) Creating KVM machine...
	I0731 18:35:35.187617  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found existing default KVM network
	I0731 18:35:35.187723  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found existing private KVM network mk-ha-326651
	I0731 18:35:35.187863  413977 main.go:141] libmachine: (ha-326651-m02) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 ...
	I0731 18:35:35.187884  413977 main.go:141] libmachine: (ha-326651-m02) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:35:35.187968  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.187853  414340 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:35:35.188055  413977 main.go:141] libmachine: (ha-326651-m02) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:35:35.446325  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.446211  414340 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa...
	I0731 18:35:35.643707  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.643548  414340 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/ha-326651-m02.rawdisk...
	I0731 18:35:35.643751  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Writing magic tar header
	I0731 18:35:35.643768  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Writing SSH key tar header
	I0731 18:35:35.643782  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.643705  414340 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 ...
	I0731 18:35:35.643867  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02
	I0731 18:35:35.643929  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:35:35.643945  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 (perms=drwx------)
	I0731 18:35:35.643966  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:35:35.643978  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:35:35.643989  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:35:35.644003  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:35:35.644017  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:35:35.644030  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:35:35.644039  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:35:35.644054  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:35:35.644064  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home
	I0731 18:35:35.644078  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:35:35.644091  413977 main.go:141] libmachine: (ha-326651-m02) Creating domain...
	I0731 18:35:35.644107  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Skipping /home - not owner
	I0731 18:35:35.645029  413977 main.go:141] libmachine: (ha-326651-m02) define libvirt domain using xml: 
	I0731 18:35:35.645048  413977 main.go:141] libmachine: (ha-326651-m02) <domain type='kvm'>
	I0731 18:35:35.645056  413977 main.go:141] libmachine: (ha-326651-m02)   <name>ha-326651-m02</name>
	I0731 18:35:35.645066  413977 main.go:141] libmachine: (ha-326651-m02)   <memory unit='MiB'>2200</memory>
	I0731 18:35:35.645075  413977 main.go:141] libmachine: (ha-326651-m02)   <vcpu>2</vcpu>
	I0731 18:35:35.645082  413977 main.go:141] libmachine: (ha-326651-m02)   <features>
	I0731 18:35:35.645090  413977 main.go:141] libmachine: (ha-326651-m02)     <acpi/>
	I0731 18:35:35.645100  413977 main.go:141] libmachine: (ha-326651-m02)     <apic/>
	I0731 18:35:35.645107  413977 main.go:141] libmachine: (ha-326651-m02)     <pae/>
	I0731 18:35:35.645114  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645119  413977 main.go:141] libmachine: (ha-326651-m02)   </features>
	I0731 18:35:35.645126  413977 main.go:141] libmachine: (ha-326651-m02)   <cpu mode='host-passthrough'>
	I0731 18:35:35.645131  413977 main.go:141] libmachine: (ha-326651-m02)   
	I0731 18:35:35.645141  413977 main.go:141] libmachine: (ha-326651-m02)   </cpu>
	I0731 18:35:35.645189  413977 main.go:141] libmachine: (ha-326651-m02)   <os>
	I0731 18:35:35.645219  413977 main.go:141] libmachine: (ha-326651-m02)     <type>hvm</type>
	I0731 18:35:35.645228  413977 main.go:141] libmachine: (ha-326651-m02)     <boot dev='cdrom'/>
	I0731 18:35:35.645238  413977 main.go:141] libmachine: (ha-326651-m02)     <boot dev='hd'/>
	I0731 18:35:35.645245  413977 main.go:141] libmachine: (ha-326651-m02)     <bootmenu enable='no'/>
	I0731 18:35:35.645252  413977 main.go:141] libmachine: (ha-326651-m02)   </os>
	I0731 18:35:35.645260  413977 main.go:141] libmachine: (ha-326651-m02)   <devices>
	I0731 18:35:35.645267  413977 main.go:141] libmachine: (ha-326651-m02)     <disk type='file' device='cdrom'>
	I0731 18:35:35.645277  413977 main.go:141] libmachine: (ha-326651-m02)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/boot2docker.iso'/>
	I0731 18:35:35.645289  413977 main.go:141] libmachine: (ha-326651-m02)       <target dev='hdc' bus='scsi'/>
	I0731 18:35:35.645297  413977 main.go:141] libmachine: (ha-326651-m02)       <readonly/>
	I0731 18:35:35.645302  413977 main.go:141] libmachine: (ha-326651-m02)     </disk>
	I0731 18:35:35.645311  413977 main.go:141] libmachine: (ha-326651-m02)     <disk type='file' device='disk'>
	I0731 18:35:35.645317  413977 main.go:141] libmachine: (ha-326651-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:35:35.645326  413977 main.go:141] libmachine: (ha-326651-m02)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/ha-326651-m02.rawdisk'/>
	I0731 18:35:35.645332  413977 main.go:141] libmachine: (ha-326651-m02)       <target dev='hda' bus='virtio'/>
	I0731 18:35:35.645339  413977 main.go:141] libmachine: (ha-326651-m02)     </disk>
	I0731 18:35:35.645344  413977 main.go:141] libmachine: (ha-326651-m02)     <interface type='network'>
	I0731 18:35:35.645351  413977 main.go:141] libmachine: (ha-326651-m02)       <source network='mk-ha-326651'/>
	I0731 18:35:35.645356  413977 main.go:141] libmachine: (ha-326651-m02)       <model type='virtio'/>
	I0731 18:35:35.645362  413977 main.go:141] libmachine: (ha-326651-m02)     </interface>
	I0731 18:35:35.645367  413977 main.go:141] libmachine: (ha-326651-m02)     <interface type='network'>
	I0731 18:35:35.645390  413977 main.go:141] libmachine: (ha-326651-m02)       <source network='default'/>
	I0731 18:35:35.645418  413977 main.go:141] libmachine: (ha-326651-m02)       <model type='virtio'/>
	I0731 18:35:35.645438  413977 main.go:141] libmachine: (ha-326651-m02)     </interface>
	I0731 18:35:35.645454  413977 main.go:141] libmachine: (ha-326651-m02)     <serial type='pty'>
	I0731 18:35:35.645467  413977 main.go:141] libmachine: (ha-326651-m02)       <target port='0'/>
	I0731 18:35:35.645474  413977 main.go:141] libmachine: (ha-326651-m02)     </serial>
	I0731 18:35:35.645486  413977 main.go:141] libmachine: (ha-326651-m02)     <console type='pty'>
	I0731 18:35:35.645496  413977 main.go:141] libmachine: (ha-326651-m02)       <target type='serial' port='0'/>
	I0731 18:35:35.645502  413977 main.go:141] libmachine: (ha-326651-m02)     </console>
	I0731 18:35:35.645511  413977 main.go:141] libmachine: (ha-326651-m02)     <rng model='virtio'>
	I0731 18:35:35.645541  413977 main.go:141] libmachine: (ha-326651-m02)       <backend model='random'>/dev/random</backend>
	I0731 18:35:35.645565  413977 main.go:141] libmachine: (ha-326651-m02)     </rng>
	I0731 18:35:35.645575  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645584  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645594  413977 main.go:141] libmachine: (ha-326651-m02)   </devices>
	I0731 18:35:35.645604  413977 main.go:141] libmachine: (ha-326651-m02) </domain>
	I0731 18:35:35.645614  413977 main.go:141] libmachine: (ha-326651-m02) 
	I0731 18:35:35.652329  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:98:43:24 in network default
	I0731 18:35:35.652898  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring networks are active...
	I0731 18:35:35.652925  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:35.653591  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring network default is active
	I0731 18:35:35.653867  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring network mk-ha-326651 is active
	I0731 18:35:35.654354  413977 main.go:141] libmachine: (ha-326651-m02) Getting domain xml...
	I0731 18:35:35.655121  413977 main.go:141] libmachine: (ha-326651-m02) Creating domain...
	I0731 18:35:36.861124  413977 main.go:141] libmachine: (ha-326651-m02) Waiting to get IP...
	I0731 18:35:36.862084  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:36.862504  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:36.862566  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:36.862493  414340 retry.go:31] will retry after 199.826809ms: waiting for machine to come up
	I0731 18:35:37.064345  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.064927  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.064967  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.064860  414340 retry.go:31] will retry after 236.948402ms: waiting for machine to come up
	I0731 18:35:37.303612  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.304140  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.304168  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.304071  414340 retry.go:31] will retry after 402.03658ms: waiting for machine to come up
	I0731 18:35:37.707311  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.707733  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.707761  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.707695  414340 retry.go:31] will retry after 569.979997ms: waiting for machine to come up
	I0731 18:35:38.279602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:38.280082  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:38.280114  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:38.280026  414340 retry.go:31] will retry after 586.366279ms: waiting for machine to come up
	I0731 18:35:38.867792  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:38.868371  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:38.868424  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:38.868260  414340 retry.go:31] will retry after 687.200514ms: waiting for machine to come up
	I0731 18:35:39.557177  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:39.557574  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:39.557602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:39.557525  414340 retry.go:31] will retry after 1.024789258s: waiting for machine to come up
	I0731 18:35:40.584078  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:40.584531  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:40.584563  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:40.584464  414340 retry.go:31] will retry after 1.404649213s: waiting for machine to come up
	I0731 18:35:41.991082  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:41.991564  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:41.991590  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:41.991535  414340 retry.go:31] will retry after 1.367302302s: waiting for machine to come up
	I0731 18:35:43.361034  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:43.361505  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:43.361538  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:43.361449  414340 retry.go:31] will retry after 1.67771358s: waiting for machine to come up
	I0731 18:35:45.041027  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:45.041462  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:45.041486  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:45.041412  414340 retry.go:31] will retry after 2.147309485s: waiting for machine to come up
	I0731 18:35:47.190621  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:47.191055  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:47.191083  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:47.191003  414340 retry.go:31] will retry after 3.358926024s: waiting for machine to come up
	I0731 18:35:50.551544  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:50.552176  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:50.552204  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:50.552107  414340 retry.go:31] will retry after 3.792833111s: waiting for machine to come up
	I0731 18:35:54.349209  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:54.349784  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:54.349812  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:54.349732  414340 retry.go:31] will retry after 3.445591127s: waiting for machine to come up
	I0731 18:35:57.797811  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.798304  413977 main.go:141] libmachine: (ha-326651-m02) Found IP for machine: 192.168.39.202
	I0731 18:35:57.798328  413977 main.go:141] libmachine: (ha-326651-m02) Reserving static IP address...
	I0731 18:35:57.798341  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.798792  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find host DHCP lease matching {name: "ha-326651-m02", mac: "52:54:00:d7:a8:57", ip: "192.168.39.202"} in network mk-ha-326651
	I0731 18:35:57.872962  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Getting to WaitForSSH function...
	I0731 18:35:57.872999  413977 main.go:141] libmachine: (ha-326651-m02) Reserved static IP address: 192.168.39.202
	I0731 18:35:57.873013  413977 main.go:141] libmachine: (ha-326651-m02) Waiting for SSH to be available...
	I0731 18:35:57.875745  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.876129  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:57.876160  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.876305  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using SSH client type: external
	I0731 18:35:57.876339  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa (-rw-------)
	I0731 18:35:57.876437  413977 main.go:141] libmachine: (ha-326651-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:35:57.876464  413977 main.go:141] libmachine: (ha-326651-m02) DBG | About to run SSH command:
	I0731 18:35:57.876477  413977 main.go:141] libmachine: (ha-326651-m02) DBG | exit 0
	I0731 18:35:58.004566  413977 main.go:141] libmachine: (ha-326651-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 18:35:58.004814  413977 main.go:141] libmachine: (ha-326651-m02) KVM machine creation complete!
	I0731 18:35:58.005200  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:58.005758  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:58.005947  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:58.006104  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:35:58.006123  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:35:58.007450  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:35:58.007465  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:35:58.007471  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:35:58.007477  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.009887  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.010310  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.010341  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.010446  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.010629  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.010786  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.010929  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.011127  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.011396  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.011415  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:35:58.120065  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:58.120100  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:35:58.120111  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.123179  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.123572  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.123602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.123756  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.123987  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.124130  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.124325  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.124510  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.124739  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.124755  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:35:58.233257  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:35:58.233341  413977 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:35:58.233359  413977 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:35:58.233373  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.233688  413977 buildroot.go:166] provisioning hostname "ha-326651-m02"
	I0731 18:35:58.233723  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.234009  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.236712  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.237040  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.237074  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.237243  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.237437  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.237601  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.237758  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.237947  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.238160  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.238172  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651-m02 && echo "ha-326651-m02" | sudo tee /etc/hostname
	I0731 18:35:58.366608  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651-m02
	
	I0731 18:35:58.366642  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.369425  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.369745  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.369786  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.369940  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.370170  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.370387  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.370564  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.370744  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.370963  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.370988  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:35:58.489910  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:58.489944  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:35:58.489960  413977 buildroot.go:174] setting up certificates
	I0731 18:35:58.489970  413977 provision.go:84] configureAuth start
	I0731 18:35:58.489978  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.490280  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:58.492850  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.493212  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.493238  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.493369  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.495952  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.496350  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.496385  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.496553  413977 provision.go:143] copyHostCerts
	I0731 18:35:58.496584  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:58.496622  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:35:58.496635  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:58.496708  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:35:58.496805  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:58.496830  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:35:58.496840  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:58.496887  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:35:58.496954  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:58.496980  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:35:58.496990  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:58.497024  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:35:58.497091  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651-m02 san=[127.0.0.1 192.168.39.202 ha-326651-m02 localhost minikube]
	I0731 18:35:58.731508  413977 provision.go:177] copyRemoteCerts
	I0731 18:35:58.731583  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:35:58.731619  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.734088  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.734437  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.734464  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.734630  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.734911  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.735109  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.735260  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:58.819261  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:35:58.819352  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:35:58.844920  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:35:58.845002  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:35:58.868993  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:35:58.869083  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:35:58.893720  413977 provision.go:87] duration metric: took 403.735131ms to configureAuth
	I0731 18:35:58.893748  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:35:58.893955  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:58.894049  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.896796  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.897200  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.897231  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.897376  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.897584  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.897747  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.897905  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.898067  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.898232  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.898247  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:35:59.184923  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:35:59.184951  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:35:59.184960  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetURL
	I0731 18:35:59.186313  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using libvirt version 6000000
	I0731 18:35:59.188530  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.188801  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.188829  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.189018  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:35:59.189039  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:35:59.189047  413977 client.go:171] duration metric: took 24.003516515s to LocalClient.Create
	I0731 18:35:59.189072  413977 start.go:167] duration metric: took 24.003575545s to libmachine.API.Create "ha-326651"
	I0731 18:35:59.189085  413977 start.go:293] postStartSetup for "ha-326651-m02" (driver="kvm2")
	I0731 18:35:59.189102  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:35:59.189127  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.189397  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:35:59.189422  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.191929  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.192325  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.192356  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.192553  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.192777  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.192956  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.193139  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.280775  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:35:59.285375  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:35:59.285408  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:35:59.285476  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:35:59.285561  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:35:59.285572  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:35:59.285665  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:35:59.296957  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:35:59.322867  413977 start.go:296] duration metric: took 133.767315ms for postStartSetup
	I0731 18:35:59.322920  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:59.323524  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:59.326374  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.326710  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.326737  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.326972  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:59.327152  413977 start.go:128] duration metric: took 24.15947511s to createHost
	I0731 18:35:59.327176  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.329421  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.329811  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.329842  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.330004  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.330187  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.330355  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.330490  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.330677  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:59.330867  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:59.330880  413977 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:35:59.441112  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722450959.415688208
	
	I0731 18:35:59.441137  413977 fix.go:216] guest clock: 1722450959.415688208
	I0731 18:35:59.441147  413977 fix.go:229] Guest: 2024-07-31 18:35:59.415688208 +0000 UTC Remote: 2024-07-31 18:35:59.327163108 +0000 UTC m=+78.640467370 (delta=88.5251ms)
	I0731 18:35:59.441168  413977 fix.go:200] guest clock delta is within tolerance: 88.5251ms
	I0731 18:35:59.441175  413977 start.go:83] releasing machines lock for "ha-326651-m02", held for 24.273590624s
	I0731 18:35:59.441200  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.441487  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:59.444241  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.444718  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.444758  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.447114  413977 out.go:177] * Found network options:
	I0731 18:35:59.448560  413977 out.go:177]   - NO_PROXY=192.168.39.220
	W0731 18:35:59.449919  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:35:59.449954  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450491  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450707  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450803  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:35:59.450851  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	W0731 18:35:59.450874  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:35:59.450964  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:35:59.450991  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.453542  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453650  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453893  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.453933  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453958  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.453971  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.454091  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.454192  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.454280  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.454383  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.454440  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.454524  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.454592  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.454665  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.694205  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:35:59.700544  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:35:59.700620  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:35:59.717245  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:35:59.717278  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:35:59.717354  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:35:59.739360  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:35:59.758132  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:35:59.758199  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:35:59.781092  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:35:59.800140  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:35:59.929257  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:36:00.079862  413977 docker.go:233] disabling docker service ...
	I0731 18:36:00.079950  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:36:00.094037  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:36:00.106860  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:36:00.246092  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:36:00.384050  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:36:00.398412  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:36:00.418759  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:36:00.418830  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.429240  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:36:00.429313  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.440139  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.450612  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.461331  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:36:00.472071  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.482078  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.499048  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
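The run of sed invocations above boils down to a handful of CRI-O settings, the two substantive ones being the pause image and the cgroup manager. A minimal Go sketch of just those two edits, applied locally to /etc/crio/crio.conf.d/02-crio.conf as root instead of over minikube's ssh_runner (the wrapper program is illustrative, not minikube's code):

package main

// Rewrite the pause_image and cgroup_manager lines in the CRI-O drop-in,
// mirroring the two central sed commands in the log above.
import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // same file the log edits
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}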
	I0731 18:36:00.509449  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:36:00.518961  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:36:00.519037  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:36:00.532607  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
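The sysctl probe fails with status 255 simply because br_netfilter is not loaded yet; the log then loads the module and enables IPv4 forwarding. A small Go sketch of that prerequisite check, assuming a local root shell rather than minikube's ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the bridge
// netfilter sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl file only appears once the module is loaded
		// (hence the status-255 "cannot stat" above).
		if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", merr, out)
		}
	}
	// Same effect as `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}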
	I0731 18:36:00.542051  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:00.658660  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:36:00.797574  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:36:00.797659  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:36:00.802331  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:36:00.802395  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:36:00.806274  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:36:00.846409  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:36:00.846496  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:36:00.876400  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:36:00.906370  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:36:00.908199  413977 out.go:177]   - env NO_PROXY=192.168.39.220
	I0731 18:36:00.909625  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:36:00.912094  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:36:00.912420  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:36:00.912442  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:36:00.912633  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:36:00.916728  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
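The grep/cp pipeline above is an idempotent hosts-file edit: drop any existing host.minikube.internal line, append the fresh mapping, and copy the result back into place. A rough Go equivalent; the scratch path /tmp/hosts.test is illustrative only, so the sketch can be tried without touching the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostsEntry removes any line already ending in "\t<name>" and appends
// "ip\tname", the same filtering the bash one-liner does with grep -v.
func addHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := addHostsEntry("/tmp/hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}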
	I0731 18:36:00.930599  413977 mustload.go:65] Loading cluster: ha-326651
	I0731 18:36:00.930859  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:00.931240  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:00.931282  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:00.946413  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I0731 18:36:00.946933  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:00.947482  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:00.947508  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:00.947828  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:00.948025  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:36:00.949643  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:36:00.950006  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:00.950032  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:00.965008  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0731 18:36:00.965487  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:00.966001  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:00.966030  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:00.966343  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:00.966528  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:36:00.966740  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.202
	I0731 18:36:00.966751  413977 certs.go:194] generating shared ca certs ...
	I0731 18:36:00.966767  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:00.966890  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:36:00.966927  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:36:00.966937  413977 certs.go:256] generating profile certs ...
	I0731 18:36:00.967008  413977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:36:00.967033  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c
	I0731 18:36:00.967054  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.254]
	I0731 18:36:01.112495  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c ...
	I0731 18:36:01.112531  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c: {Name:mk8ceeb615d268d5b0f00c91b069a1a3723f2c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:01.112733  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c ...
	I0731 18:36:01.112754  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c: {Name:mk00478113f238cc7eec245068b06cb5f757c59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:01.112857  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:36:01.113024  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
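The apiserver profile cert is the one piece regenerated for the second control-plane node, because its SAN list now has to cover the service IP, localhost, both node IPs and the kube-vip VIP (192.168.39.254). A self-contained crypto/x509 sketch of such a cert, signed by a throwaway CA; the key type and validity periods are illustrative assumptions, not necessarily what minikube uses:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Errors are ignored for brevity; a real tool would check each one.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SAN set the log lists for apiserver.crt.1c9aea3c.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.202"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("apiserver cert: %d bytes, CA cert: %d bytes\n", len(srvDER), len(caDER))
}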
	I0731 18:36:01.113201  413977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:36:01.113219  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:36:01.113238  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:36:01.113253  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:36:01.113273  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:36:01.113289  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:36:01.113307  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:36:01.113325  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:36:01.113340  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:36:01.113413  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:36:01.113456  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:36:01.113472  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:36:01.113504  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:36:01.113536  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:36:01.113568  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:36:01.113626  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:36:01.113664  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.113684  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.113703  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.113767  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:36:01.116870  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:01.117252  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:36:01.117275  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:01.117512  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:36:01.117750  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:36:01.117916  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:36:01.118056  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:36:01.196792  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 18:36:01.202328  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 18:36:01.213577  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 18:36:01.218105  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 18:36:01.229389  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 18:36:01.234401  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 18:36:01.246446  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 18:36:01.250856  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 18:36:01.262748  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 18:36:01.267259  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 18:36:01.278635  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 18:36:01.283174  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 18:36:01.294090  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:36:01.319298  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:36:01.342683  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:36:01.367373  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:36:01.391756  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 18:36:01.415690  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:36:01.438350  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:36:01.463256  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:36:01.487181  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:36:01.513206  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:36:01.537436  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:36:01.561619  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 18:36:01.579854  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 18:36:01.596492  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 18:36:01.613001  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 18:36:01.629229  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 18:36:01.646433  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 18:36:01.664055  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 18:36:01.681455  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:36:01.687543  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:36:01.698742  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.703087  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.703156  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.708912  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:36:01.720079  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:36:01.731172  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.735518  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.735578  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.741589  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:36:01.752893  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:36:01.764993  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.770017  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.770088  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.776371  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
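Each openssl-hash/symlink pair above wires a PEM into the hashed /etc/ssl/certs layout that OpenSSL-based clients scan. A Go sketch of one such link, shelling out to openssl for the subject hash (the same `openssl x509 -hash -noout -in` invocation the log runs); the paths are the ones from the log, and root is assumed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert computes the certificate's subject hash and symlinks
// /etc/ssl/certs/<hash>.0 to the PEM if the link is not already there.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}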
	I0731 18:36:01.788522  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:36:01.792612  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:36:01.792669  413977 kubeadm.go:934] updating node {m02 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0731 18:36:01.792767  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:36:01.792797  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:36:01.792841  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:36:01.810740  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:36:01.810808  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 18:36:01.810870  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:36:01.821431  413977 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 18:36:01.821493  413977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 18:36:01.832107  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 18:36:01.832141  413977 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 18:36:01.832142  413977 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 18:36:01.832149  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:36:01.832340  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:36:01.837253  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 18:36:01.837285  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 18:36:03.944533  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:36:03.960686  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:36:03.960798  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:36:03.965639  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 18:36:03.965681  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 18:36:09.311238  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:36:09.311325  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:36:09.316312  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 18:36:09.316363  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
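kubectl, kubelet and kubeadm are only transferred when the stat existence check fails, so a node that already has the binaries skips the ~200 MB of copies. A sketch of that check-then-copy pattern; the plain local file copy here stands in for minikube's scp-over-ssh transfer:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies the cached binary to its target path only when the
// target is missing, mirroring the stat/scp pairs in the log.
func ensureBinary(cachePath, destPath string) error {
	if _, err := os.Stat(destPath); err == nil {
		return nil // already present, nothing to transfer
	}
	if err := os.MkdirAll(filepath.Dir(destPath), 0755); err != nil {
		return err
	}
	src, err := os.Open(cachePath)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(destPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	if err := ensureBinary(
		"/home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet",
		"/var/lib/minikube/binaries/v1.30.3/kubelet",
	); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}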
	I0731 18:36:09.571272  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 18:36:09.581469  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 18:36:09.598804  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:36:09.616310  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 18:36:09.633298  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:36:09.637615  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:36:09.650955  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:09.786501  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:36:09.808147  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:36:09.808597  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:09.808644  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:09.824421  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0731 18:36:09.824979  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:09.825520  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:09.825545  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:09.825893  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:09.826077  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:36:09.826224  413977 start.go:317] joinCluster: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0731 18:36:09.826321  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 18:36:09.826338  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:36:09.829547  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:09.830199  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:36:09.830222  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:09.830427  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:36:09.830693  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:36:09.830848  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:36:09.831020  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:36:10.001149  413977 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:10.001192  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cb1zae.ffq2me10k33ld2gl --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0731 18:36:31.947814  413977 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cb1zae.ffq2me10k33ld2gl --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (21.946589058s)
	I0731 18:36:31.947859  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 18:36:32.512036  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651-m02 minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=false
	I0731 18:36:32.648147  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326651-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 18:36:32.811459  413977 start.go:319] duration metric: took 22.985226172s to joinCluster
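The join itself is two commands: print a join command on the existing control plane (kubeadm token create --print-join-command), then run it on the new node with the extra control-plane flags seen above. A sketch of that flow with os/exec; for illustration both steps run in one process here, whereas minikube executes them over ssh on different machines and also passes --ignore-preflight-errors, --cri-socket and --node-name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: ask the existing control plane for a join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "token create failed:", err)
		return
	}
	joinArgs := strings.Fields(strings.TrimSpace(string(out)))
	if len(joinArgs) == 0 {
		return
	}
	// Step 2: run the printed command on the new node with control-plane flags.
	joinArgs = append(joinArgs,
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.202",
		"--apiserver-bind-port=8443",
	)
	cmd := exec.Command(joinArgs[0], joinArgs[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "join failed:", err)
	}
}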
	I0731 18:36:32.811551  413977 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:32.811905  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:32.812972  413977 out.go:177] * Verifying Kubernetes components...
	I0731 18:36:32.814591  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:33.044219  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:36:33.117979  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:36:33.118326  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 18:36:33.118396  413977 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I0731 18:36:33.118595  413977 node_ready.go:35] waiting up to 6m0s for node "ha-326651-m02" to be "Ready" ...
	I0731 18:36:33.118684  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:33.118692  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:33.118700  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:33.118705  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:33.135614  413977 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 18:36:33.619795  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:33.619826  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:33.619837  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:33.619844  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:33.641399  413977 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 18:36:34.119252  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:34.119276  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:34.119286  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:34.119293  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:34.123528  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:34.619820  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:34.619853  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:34.619864  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:34.619879  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:34.627714  413977 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 18:36:35.118866  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:35.118893  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:35.118905  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:35.118909  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:35.122701  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:35.123182  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
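Everything that follows is the same request repeated roughly every 500 ms until the node's Ready condition flips to True or the 6-minute budget runs out. A bare-bones Go version of that poll against the standard Kubernetes nodes endpoint; TLS and bearer-token setup are omitted, so a real client would also need the cluster CA and credentials from the kubeconfig:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node holds just the conditions slice we need from the Node object.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// nodeReady GETs /api/v1/nodes/<name> and reports whether Ready is "True".
func nodeReady(client *http.Client, server, name string) (bool, error) {
	resp, err := client.Get(server + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(client, "https://192.168.39.220:8443", "ha-326651-m02"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}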
	I0731 18:36:35.619811  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:35.619833  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:35.619842  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:35.619846  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:35.623151  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:36.119732  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:36.119753  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:36.119762  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:36.119766  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:36.123530  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:36.619092  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:36.619125  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:36.619136  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:36.619140  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:36.622490  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:37.119642  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:37.119665  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:37.119673  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:37.119677  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:37.122991  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:37.123835  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:37.619251  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:37.619283  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:37.619292  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:37.619296  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:37.623460  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:38.119708  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:38.119736  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:38.119745  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:38.119749  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:38.123105  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:38.619250  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:38.619275  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:38.619284  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:38.619288  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:38.622772  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.118886  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:39.118911  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:39.118920  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:39.118924  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:39.122321  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.619186  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:39.619210  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:39.619219  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:39.619222  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:39.622507  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.623126  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:40.119023  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:40.119051  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:40.119064  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:40.119069  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:40.122756  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:40.619118  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:40.619146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:40.619155  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:40.619160  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:40.622901  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.118817  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:41.118841  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:41.118851  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:41.118855  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:41.121963  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.619709  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:41.619734  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:41.619742  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:41.619747  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:41.623720  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.624433  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:42.119582  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:42.119612  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:42.119622  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:42.119628  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:42.122500  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:42.619202  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:42.619235  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:42.619248  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:42.619253  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:42.622947  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:43.119214  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:43.119237  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:43.119246  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:43.119249  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:43.124276  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:43.619483  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:43.619508  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:43.619517  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:43.619521  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:43.623921  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:44.119432  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:44.119458  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:44.119466  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:44.119470  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:44.123304  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:44.124032  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:44.619529  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:44.619553  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:44.619562  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:44.619567  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:44.623067  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:45.119620  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:45.119646  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:45.119654  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:45.119657  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:45.123207  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:45.618912  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:45.618938  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:45.618947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:45.618951  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:45.622718  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.119263  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:46.119298  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:46.119308  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:46.119313  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:46.123083  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.619066  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:46.619091  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:46.619100  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:46.619104  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:46.622201  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.623139  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:47.119393  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:47.119424  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:47.119433  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:47.119439  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:47.123007  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:47.618994  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:47.619017  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:47.619026  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:47.619030  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:47.622368  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.119257  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:48.119279  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:48.119288  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:48.119293  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:48.123310  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.619282  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:48.619309  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:48.619318  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:48.619322  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:48.622987  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.623591  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:49.118960  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:49.118988  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:49.118998  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:49.119003  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:49.122405  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:49.619280  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:49.619305  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:49.619312  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:49.619317  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:49.622791  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.119104  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.119129  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.119138  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.119142  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.122870  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.619224  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.619247  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.619255  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.619258  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.622725  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.623311  413977 node_ready.go:49] node "ha-326651-m02" has status "Ready":"True"
	I0731 18:36:50.623351  413977 node_ready.go:38] duration metric: took 17.504731047s for node "ha-326651-m02" to be "Ready" ...
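
The loop above is minikube's node_ready check: it polls GET /api/v1/nodes/ha-326651-m02 roughly every 500ms until the node reports a Ready condition of "True". A minimal sketch of the same idea using client-go (the kubeconfig path and polling cadence are placeholders for illustration, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its REST config internally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-326651-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				// The node_ready check in the log waits for this condition to flip to True.
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
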
	I0731 18:36:50.623363  413977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:36:50.623483  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:50.623496  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.623507  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.623517  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.628686  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:50.635769  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.635863  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hsr7k
	I0731 18:36:50.635871  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.635879  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.635886  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.639019  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.639721  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.639741  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.639752  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.639759  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.642992  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.643579  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.643601  413977 pod_ready.go:81] duration metric: took 7.805024ms for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.643611  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.643669  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p2tfn
	I0731 18:36:50.643676  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.643683  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.643688  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.647444  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.648594  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.648607  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.648615  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.648621  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.651381  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.652052  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.652081  413977 pod_ready.go:81] duration metric: took 8.461392ms for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.652094  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.652183  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651
	I0731 18:36:50.652195  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.652203  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.652207  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.655850  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.656990  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.657006  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.657014  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.657019  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.659586  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.660121  413977 pod_ready.go:92] pod "etcd-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.660142  413977 pod_ready.go:81] duration metric: took 8.037093ms for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.660158  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.660218  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m02
	I0731 18:36:50.660226  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.660233  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.660237  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.663068  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.663711  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.663728  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.663736  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.663739  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.666777  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.667644  413977 pod_ready.go:92] pod "etcd-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.667666  413977 pod_ready.go:81] duration metric: took 7.501047ms for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.667684  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.820032  413977 request.go:629] Waited for 152.267535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:36:50.820136  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:36:50.820146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.820156  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.820170  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.823487  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
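
The "Waited for ... due to client-side throttling, not priority and fairness" messages that begin here are emitted by client-go's built-in token-bucket rate limiter, which defaults to roughly 5 requests per second with a burst of 10; once the readiness checks issue many GETs back to back, each request queues briefly on the client side. A sketch, assuming the standard rest.Config fields, of how a caller could raise that budget (the numbers are illustrative, not what minikube uses):

package clientutil

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger client-side rate budget.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is ~5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
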
	I0731 18:36:51.019288  413977 request.go:629] Waited for 195.11581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.019343  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.019349  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.019359  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.019365  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.021971  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:51.022498  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.022513  413977 pod_ready.go:81] duration metric: took 354.821451ms for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.022523  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.219661  413977 request.go:629] Waited for 197.049789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:36:51.219725  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:36:51.219730  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.219737  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.219742  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.235111  413977 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 18:36:51.419933  413977 request.go:629] Waited for 183.368029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:51.419996  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:51.420001  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.420009  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.420013  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.423405  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:51.423972  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.423990  413977 pod_ready.go:81] duration metric: took 401.460167ms for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.424000  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.620072  413977 request.go:629] Waited for 195.990243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:36:51.620141  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:36:51.620146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.620154  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.620158  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.623549  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:51.819910  413977 request.go:629] Waited for 195.384302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.819977  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.819983  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.819994  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.819999  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.822872  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:51.823601  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.823624  413977 pod_ready.go:81] duration metric: took 399.617251ms for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.823638  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.019634  413977 request.go:629] Waited for 195.919417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:36:52.019705  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:36:52.019710  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.019719  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.019724  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.023947  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:52.220260  413977 request.go:629] Waited for 195.397684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:52.220320  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:52.220325  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.220332  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.220336  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.223732  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.224369  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:52.224409  413977 pod_ready.go:81] duration metric: took 400.763328ms for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.224422  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.419481  413977 request.go:629] Waited for 194.973874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:36:52.419570  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:36:52.419577  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.419585  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.419589  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.423022  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.620098  413977 request.go:629] Waited for 196.372792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:52.620260  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:52.620275  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.620286  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.620295  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.623563  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.624517  413977 pod_ready.go:92] pod "kube-proxy-hg6sj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:52.624546  413977 pod_ready.go:81] duration metric: took 400.111099ms for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.624562  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.819687  413977 request.go:629] Waited for 195.030621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:36:52.819748  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:36:52.819754  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.819762  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.819765  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.823011  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.020234  413977 request.go:629] Waited for 196.391359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.020315  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.020322  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.020334  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.020344  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.023335  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:53.023942  413977 pod_ready.go:92] pod "kube-proxy-stqb2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.023963  413977 pod_ready.go:81] duration metric: took 399.393046ms for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.023975  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.220035  413977 request.go:629] Waited for 195.975129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:36:53.220116  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:36:53.220131  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.220139  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.220146  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.224063  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.420226  413977 request.go:629] Waited for 195.373634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:53.420283  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:53.420289  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.420297  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.420302  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.423579  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.424326  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.424348  413977 pod_ready.go:81] duration metric: took 400.362187ms for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.424357  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.619262  413977 request.go:629] Waited for 194.802916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:36:53.619362  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:36:53.619369  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.619387  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.619398  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.623028  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.820000  413977 request.go:629] Waited for 196.367475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.820090  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.820096  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.820104  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.820108  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.823825  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.824352  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.824385  413977 pod_ready.go:81] duration metric: took 400.009008ms for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.824401  413977 pod_ready.go:38] duration metric: took 3.200992959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:36:53.824433  413977 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:36:53.824502  413977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:36:53.840295  413977 api_server.go:72] duration metric: took 21.028699297s to wait for apiserver process to appear ...
	I0731 18:36:53.840323  413977 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:36:53.840346  413977 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0731 18:36:53.846270  413977 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0731 18:36:53.846362  413977 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I0731 18:36:53.846378  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.846390  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.846401  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.847375  413977 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 18:36:53.847469  413977 api_server.go:141] control plane version: v1.30.3
	I0731 18:36:53.847486  413977 api_server.go:131] duration metric: took 7.156659ms to wait for apiserver health ...
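
The health check above is an HTTPS GET of /healthz (expecting the literal body "ok") followed by GET /version to read the control-plane version. A minimal standalone probe in Go; skipping TLS verification is an assumption made only to keep the sketch short (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Insecure transport for brevity only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.220:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d %s\n", path, resp.StatusCode, body)
	}
}
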
	I0731 18:36:53.847493  413977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:36:54.019755  413977 request.go:629] Waited for 172.188734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.019895  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.019922  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.019934  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.019941  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.024788  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:54.029445  413977 system_pods.go:59] 17 kube-system pods found
	I0731 18:36:54.029486  413977 system_pods.go:61] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:36:54.029492  413977 system_pods.go:61] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:36:54.029496  413977 system_pods.go:61] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:36:54.029499  413977 system_pods.go:61] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:36:54.029502  413977 system_pods.go:61] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:36:54.029505  413977 system_pods.go:61] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:36:54.029508  413977 system_pods.go:61] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:36:54.029511  413977 system_pods.go:61] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:36:54.029515  413977 system_pods.go:61] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:36:54.029519  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:36:54.029524  413977 system_pods.go:61] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:36:54.029527  413977 system_pods.go:61] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:36:54.029530  413977 system_pods.go:61] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:36:54.029533  413977 system_pods.go:61] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:36:54.029536  413977 system_pods.go:61] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:36:54.029539  413977 system_pods.go:61] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:36:54.029542  413977 system_pods.go:61] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:36:54.029549  413977 system_pods.go:74] duration metric: took 182.050143ms to wait for pod list to return data ...
	I0731 18:36:54.029561  413977 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:36:54.219768  413977 request.go:629] Waited for 190.124299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:36:54.219879  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:36:54.219891  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.219903  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.219912  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.222946  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:54.223161  413977 default_sa.go:45] found service account: "default"
	I0731 18:36:54.223176  413977 default_sa.go:55] duration metric: took 193.609173ms for default service account to be created ...
	I0731 18:36:54.223184  413977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:36:54.419651  413977 request.go:629] Waited for 196.386127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.419720  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.419725  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.419733  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.419736  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.424775  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:54.430823  413977 system_pods.go:86] 17 kube-system pods found
	I0731 18:36:54.430854  413977 system_pods.go:89] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:36:54.430864  413977 system_pods.go:89] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:36:54.430870  413977 system_pods.go:89] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:36:54.430875  413977 system_pods.go:89] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:36:54.430882  413977 system_pods.go:89] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:36:54.430889  413977 system_pods.go:89] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:36:54.430895  413977 system_pods.go:89] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:36:54.430902  413977 system_pods.go:89] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:36:54.430912  413977 system_pods.go:89] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:36:54.430919  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:36:54.430930  413977 system_pods.go:89] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:36:54.430937  413977 system_pods.go:89] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:36:54.430944  413977 system_pods.go:89] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:36:54.430953  413977 system_pods.go:89] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:36:54.430961  413977 system_pods.go:89] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:36:54.430969  413977 system_pods.go:89] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:36:54.430975  413977 system_pods.go:89] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:36:54.430988  413977 system_pods.go:126] duration metric: took 207.796783ms to wait for k8s-apps to be running ...
	I0731 18:36:54.431001  413977 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:36:54.431058  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:36:54.446884  413977 system_svc.go:56] duration metric: took 15.869691ms WaitForService to wait for kubelet
	I0731 18:36:54.446917  413977 kubeadm.go:582] duration metric: took 21.635330045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:36:54.446939  413977 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:36:54.619227  413977 request.go:629] Waited for 172.209982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I0731 18:36:54.619294  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I0731 18:36:54.619300  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.619308  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.619313  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.622925  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:54.623751  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:36:54.623791  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:36:54.623816  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:36:54.623820  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:36:54.623826  413977 node_conditions.go:105] duration metric: took 176.882629ms to run NodePressure ...
	I0731 18:36:54.623838  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:36:54.623868  413977 start.go:255] writing updated cluster config ...
	I0731 18:36:54.626219  413977 out.go:177] 
	I0731 18:36:54.628763  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:54.628859  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:36:54.630660  413977 out.go:177] * Starting "ha-326651-m03" control-plane node in "ha-326651" cluster
	I0731 18:36:54.632068  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:36:54.632100  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:36:54.632225  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:36:54.632240  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:36:54.632350  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:36:54.632563  413977 start.go:360] acquireMachinesLock for ha-326651-m03: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:36:54.632610  413977 start.go:364] duration metric: took 26.59µs to acquireMachinesLock for "ha-326651-m03"
	I0731 18:36:54.632626  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:54.632717  413977 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 18:36:54.634360  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:36:54.634443  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:54.634479  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:54.649865  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0731 18:36:54.650366  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:54.650792  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:54.650814  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:54.651168  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:54.651420  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:36:54.651573  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:36:54.651738  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:36:54.651774  413977 client.go:168] LocalClient.Create starting
	I0731 18:36:54.651806  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:36:54.651838  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:36:54.651856  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:36:54.651908  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:36:54.651928  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:36:54.651939  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:36:54.651958  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:36:54.651966  413977 main.go:141] libmachine: (ha-326651-m03) Calling .PreCreateCheck
	I0731 18:36:54.652128  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:36:54.652579  413977 main.go:141] libmachine: Creating machine...
	I0731 18:36:54.652596  413977 main.go:141] libmachine: (ha-326651-m03) Calling .Create
	I0731 18:36:54.652732  413977 main.go:141] libmachine: (ha-326651-m03) Creating KVM machine...
	I0731 18:36:54.653878  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found existing default KVM network
	I0731 18:36:54.654014  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found existing private KVM network mk-ha-326651
	I0731 18:36:54.654182  413977 main.go:141] libmachine: (ha-326651-m03) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 ...
	I0731 18:36:54.654219  413977 main.go:141] libmachine: (ha-326651-m03) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:36:54.654321  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:54.654194  414751 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:36:54.654415  413977 main.go:141] libmachine: (ha-326651-m03) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:36:54.925445  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:54.925298  414751 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa...
	I0731 18:36:55.032632  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:55.032498  414751 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/ha-326651-m03.rawdisk...
	I0731 18:36:55.032662  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Writing magic tar header
	I0731 18:36:55.032677  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Writing SSH key tar header
	I0731 18:36:55.032691  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:55.032610  414751 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 ...
	I0731 18:36:55.032713  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03
	I0731 18:36:55.032827  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:36:55.032857  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 (perms=drwx------)
	I0731 18:36:55.032868  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:36:55.032900  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:36:55.032933  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:36:55.032945  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:36:55.032963  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:36:55.032972  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:36:55.033009  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:36:55.033032  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:36:55.033044  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home
	I0731 18:36:55.033059  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Skipping /home - not owner
	I0731 18:36:55.033075  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:36:55.033090  413977 main.go:141] libmachine: (ha-326651-m03) Creating domain...
	I0731 18:36:55.033848  413977 main.go:141] libmachine: (ha-326651-m03) define libvirt domain using xml: 
	I0731 18:36:55.033877  413977 main.go:141] libmachine: (ha-326651-m03) <domain type='kvm'>
	I0731 18:36:55.033889  413977 main.go:141] libmachine: (ha-326651-m03)   <name>ha-326651-m03</name>
	I0731 18:36:55.033896  413977 main.go:141] libmachine: (ha-326651-m03)   <memory unit='MiB'>2200</memory>
	I0731 18:36:55.033910  413977 main.go:141] libmachine: (ha-326651-m03)   <vcpu>2</vcpu>
	I0731 18:36:55.033920  413977 main.go:141] libmachine: (ha-326651-m03)   <features>
	I0731 18:36:55.033930  413977 main.go:141] libmachine: (ha-326651-m03)     <acpi/>
	I0731 18:36:55.033940  413977 main.go:141] libmachine: (ha-326651-m03)     <apic/>
	I0731 18:36:55.033951  413977 main.go:141] libmachine: (ha-326651-m03)     <pae/>
	I0731 18:36:55.033957  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.033967  413977 main.go:141] libmachine: (ha-326651-m03)   </features>
	I0731 18:36:55.033978  413977 main.go:141] libmachine: (ha-326651-m03)   <cpu mode='host-passthrough'>
	I0731 18:36:55.033988  413977 main.go:141] libmachine: (ha-326651-m03)   
	I0731 18:36:55.033998  413977 main.go:141] libmachine: (ha-326651-m03)   </cpu>
	I0731 18:36:55.034009  413977 main.go:141] libmachine: (ha-326651-m03)   <os>
	I0731 18:36:55.034019  413977 main.go:141] libmachine: (ha-326651-m03)     <type>hvm</type>
	I0731 18:36:55.034030  413977 main.go:141] libmachine: (ha-326651-m03)     <boot dev='cdrom'/>
	I0731 18:36:55.034041  413977 main.go:141] libmachine: (ha-326651-m03)     <boot dev='hd'/>
	I0731 18:36:55.034056  413977 main.go:141] libmachine: (ha-326651-m03)     <bootmenu enable='no'/>
	I0731 18:36:55.034069  413977 main.go:141] libmachine: (ha-326651-m03)   </os>
	I0731 18:36:55.034080  413977 main.go:141] libmachine: (ha-326651-m03)   <devices>
	I0731 18:36:55.034091  413977 main.go:141] libmachine: (ha-326651-m03)     <disk type='file' device='cdrom'>
	I0731 18:36:55.034104  413977 main.go:141] libmachine: (ha-326651-m03)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/boot2docker.iso'/>
	I0731 18:36:55.034116  413977 main.go:141] libmachine: (ha-326651-m03)       <target dev='hdc' bus='scsi'/>
	I0731 18:36:55.034127  413977 main.go:141] libmachine: (ha-326651-m03)       <readonly/>
	I0731 18:36:55.034136  413977 main.go:141] libmachine: (ha-326651-m03)     </disk>
	I0731 18:36:55.034169  413977 main.go:141] libmachine: (ha-326651-m03)     <disk type='file' device='disk'>
	I0731 18:36:55.034192  413977 main.go:141] libmachine: (ha-326651-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:36:55.034207  413977 main.go:141] libmachine: (ha-326651-m03)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/ha-326651-m03.rawdisk'/>
	I0731 18:36:55.034219  413977 main.go:141] libmachine: (ha-326651-m03)       <target dev='hda' bus='virtio'/>
	I0731 18:36:55.034233  413977 main.go:141] libmachine: (ha-326651-m03)     </disk>
	I0731 18:36:55.034244  413977 main.go:141] libmachine: (ha-326651-m03)     <interface type='network'>
	I0731 18:36:55.034255  413977 main.go:141] libmachine: (ha-326651-m03)       <source network='mk-ha-326651'/>
	I0731 18:36:55.034271  413977 main.go:141] libmachine: (ha-326651-m03)       <model type='virtio'/>
	I0731 18:36:55.034282  413977 main.go:141] libmachine: (ha-326651-m03)     </interface>
	I0731 18:36:55.034292  413977 main.go:141] libmachine: (ha-326651-m03)     <interface type='network'>
	I0731 18:36:55.034309  413977 main.go:141] libmachine: (ha-326651-m03)       <source network='default'/>
	I0731 18:36:55.034320  413977 main.go:141] libmachine: (ha-326651-m03)       <model type='virtio'/>
	I0731 18:36:55.034330  413977 main.go:141] libmachine: (ha-326651-m03)     </interface>
	I0731 18:36:55.034340  413977 main.go:141] libmachine: (ha-326651-m03)     <serial type='pty'>
	I0731 18:36:55.034367  413977 main.go:141] libmachine: (ha-326651-m03)       <target port='0'/>
	I0731 18:36:55.034390  413977 main.go:141] libmachine: (ha-326651-m03)     </serial>
	I0731 18:36:55.034403  413977 main.go:141] libmachine: (ha-326651-m03)     <console type='pty'>
	I0731 18:36:55.034419  413977 main.go:141] libmachine: (ha-326651-m03)       <target type='serial' port='0'/>
	I0731 18:36:55.034431  413977 main.go:141] libmachine: (ha-326651-m03)     </console>
	I0731 18:36:55.034441  413977 main.go:141] libmachine: (ha-326651-m03)     <rng model='virtio'>
	I0731 18:36:55.034452  413977 main.go:141] libmachine: (ha-326651-m03)       <backend model='random'>/dev/random</backend>
	I0731 18:36:55.034459  413977 main.go:141] libmachine: (ha-326651-m03)     </rng>
	I0731 18:36:55.034466  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.034475  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.034485  413977 main.go:141] libmachine: (ha-326651-m03)   </devices>
	I0731 18:36:55.034498  413977 main.go:141] libmachine: (ha-326651-m03) </domain>
	I0731 18:36:55.034512  413977 main.go:141] libmachine: (ha-326651-m03) 
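
The XML printed above is the libvirt domain definition the kvm2 driver submits for ha-326651-m03: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the private mk-ha-326651 network plus the default network). A hedged sketch of defining and starting such a domain with the libvirt Go bindings; the import path and calls are assumptions about the bindings, not the driver's actual code:

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same URI as KVMQemuURI in the machine config logged earlier.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	domainXML := `<domain type='kvm'>...</domain>` // stand-in for the XML printed in the log

	dom, err := conn.DomainDefineXML(domainXML) // register the persistent domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it; the driver then polls for an IP, as below
		panic(err)
	}
	fmt.Println("domain defined and started")
}
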
	I0731 18:36:55.041422  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:49:47:41 in network default
	I0731 18:36:55.041954  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring networks are active...
	I0731 18:36:55.041977  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:55.042594  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring network default is active
	I0731 18:36:55.042817  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring network mk-ha-326651 is active
	I0731 18:36:55.043176  413977 main.go:141] libmachine: (ha-326651-m03) Getting domain xml...
	I0731 18:36:55.043920  413977 main.go:141] libmachine: (ha-326651-m03) Creating domain...
	I0731 18:36:56.284446  413977 main.go:141] libmachine: (ha-326651-m03) Waiting to get IP...
	I0731 18:36:56.285331  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.285792  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.285843  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.285788  414751 retry.go:31] will retry after 304.751946ms: waiting for machine to come up
	I0731 18:36:56.592337  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.592775  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.592803  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.592717  414751 retry.go:31] will retry after 340.274018ms: waiting for machine to come up
	I0731 18:36:56.934275  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.934639  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.934664  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.934590  414751 retry.go:31] will retry after 480.912288ms: waiting for machine to come up
	I0731 18:36:57.417185  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:57.417546  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:57.417569  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:57.417515  414751 retry.go:31] will retry after 559.822127ms: waiting for machine to come up
	I0731 18:36:57.978965  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:57.979412  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:57.979445  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:57.979342  414751 retry.go:31] will retry after 661.136496ms: waiting for machine to come up
	I0731 18:36:58.641741  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:58.642127  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:58.642145  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:58.642067  414751 retry.go:31] will retry after 868.945905ms: waiting for machine to come up
	I0731 18:36:59.512206  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:59.512689  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:59.512728  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:59.512626  414751 retry.go:31] will retry after 989.429958ms: waiting for machine to come up
	I0731 18:37:00.504321  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:00.504690  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:00.504722  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:00.504638  414751 retry.go:31] will retry after 1.406836695s: waiting for machine to come up
	I0731 18:37:01.912991  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:01.913456  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:01.913484  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:01.913423  414751 retry.go:31] will retry after 1.15357756s: waiting for machine to come up
	I0731 18:37:03.068203  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:03.068692  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:03.068733  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:03.068647  414751 retry.go:31] will retry after 1.659498365s: waiting for machine to come up
	I0731 18:37:04.729694  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:04.730087  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:04.730118  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:04.730024  414751 retry.go:31] will retry after 1.779116686s: waiting for machine to come up
	I0731 18:37:06.511383  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:06.511853  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:06.511884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:06.511794  414751 retry.go:31] will retry after 3.278316837s: waiting for machine to come up
	I0731 18:37:09.792484  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:09.792916  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:09.792940  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:09.792886  414751 retry.go:31] will retry after 3.596881471s: waiting for machine to come up
	I0731 18:37:13.393517  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:13.393946  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:13.393970  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:13.393891  414751 retry.go:31] will retry after 3.454646204s: waiting for machine to come up
	I0731 18:37:16.850516  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.851033  413977 main.go:141] libmachine: (ha-326651-m03) Found IP for machine: 192.168.39.50
	I0731 18:37:16.851057  413977 main.go:141] libmachine: (ha-326651-m03) Reserving static IP address...
	I0731 18:37:16.851070  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has current primary IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.852215  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find host DHCP lease matching {name: "ha-326651-m03", mac: "52:54:00:4a:ff:37", ip: "192.168.39.50"} in network mk-ha-326651
	I0731 18:37:16.927588  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Getting to WaitForSSH function...
	I0731 18:37:16.927621  413977 main.go:141] libmachine: (ha-326651-m03) Reserved static IP address: 192.168.39.50
	I0731 18:37:16.927635  413977 main.go:141] libmachine: (ha-326651-m03) Waiting for SSH to be available...
	I0731 18:37:16.930121  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.930521  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651
	I0731 18:37:16.930551  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find defined IP address of network mk-ha-326651 interface with MAC address 52:54:00:4a:ff:37
	I0731 18:37:16.930736  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH client type: external
	I0731 18:37:16.930762  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa (-rw-------)
	I0731 18:37:16.930790  413977 main.go:141] libmachine: (ha-326651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:37:16.930812  413977 main.go:141] libmachine: (ha-326651-m03) DBG | About to run SSH command:
	I0731 18:37:16.930823  413977 main.go:141] libmachine: (ha-326651-m03) DBG | exit 0
	I0731 18:37:16.934884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | SSH cmd err, output: exit status 255: 
	I0731 18:37:16.934913  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 18:37:16.934924  413977 main.go:141] libmachine: (ha-326651-m03) DBG | command : exit 0
	I0731 18:37:16.934932  413977 main.go:141] libmachine: (ha-326651-m03) DBG | err     : exit status 255
	I0731 18:37:16.934957  413977 main.go:141] libmachine: (ha-326651-m03) DBG | output  : 
	I0731 18:37:19.935112  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Getting to WaitForSSH function...
	I0731 18:37:19.937438  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:19.937884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:19.937918  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:19.938123  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH client type: external
	I0731 18:37:19.938150  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa (-rw-------)
	I0731 18:37:19.938184  413977 main.go:141] libmachine: (ha-326651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:37:19.938203  413977 main.go:141] libmachine: (ha-326651-m03) DBG | About to run SSH command:
	I0731 18:37:19.938220  413977 main.go:141] libmachine: (ha-326651-m03) DBG | exit 0
	I0731 18:37:20.060867  413977 main.go:141] libmachine: (ha-326651-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 18:37:20.061148  413977 main.go:141] libmachine: (ha-326651-m03) KVM machine creation complete!
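The block above shows libmachine polling first for the VM's DHCP lease and then for SSH, sleeping a growing delay between attempts (the repeated "retry.go:31 ... will retry after ...: waiting for machine to come up" lines) until the guest answers. Below is a minimal Go sketch of that wait-with-backoff pattern; the helper names and the delay schedule are illustrative only, not minikube's actual retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls probe() until it succeeds or the deadline passes, sleeping a
// randomized, growing delay between attempts - roughly the pattern visible in
// the "will retry after ..." log lines above.
func waitFor(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2 // grow the delay between attempts, capped
		}
	}
}

func main() {
	// Hypothetical probe: pretend the machine only gets an IP on the 5th try.
	tries := 0
	err := waitFor(func() error {
		tries++
		if tries < 5 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}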
	I0731 18:37:20.061490  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:37:20.062097  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:20.062281  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:20.062461  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:37:20.062480  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:37:20.063844  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:37:20.063861  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:37:20.063866  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:37:20.063873  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.066216  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.066575  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.066593  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.066831  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.067010  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.067189  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.067345  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.067523  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.067813  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.067828  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:37:20.172103  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:37:20.172143  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:37:20.172159  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.175645  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.176045  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.176076  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.176257  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.176527  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.176744  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.176895  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.177073  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.177292  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.177309  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:37:20.281327  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:37:20.281404  413977 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:37:20.281415  413977 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:37:20.281427  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.281717  413977 buildroot.go:166] provisioning hostname "ha-326651-m03"
	I0731 18:37:20.281747  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.281963  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.284626  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.285058  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.285091  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.285175  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.285384  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.285581  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.285736  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.285927  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.286222  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.286244  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651-m03 && echo "ha-326651-m03" | sudo tee /etc/hostname
	I0731 18:37:20.403177  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651-m03
	
	I0731 18:37:20.403211  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.406056  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.406423  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.406453  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.406612  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.406798  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.406998  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.407102  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.407270  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.407437  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.407453  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:37:20.519447  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
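The two SSH commands just above set the node hostname and make /etc/hosts agree with it, using an idempotent grep/sed/tee snippet. As a sketch, here is how such a snippet could be rendered from Go before being pushed over SSH; the function name is mine and this is not minikube's provisioner code, just the same script parameterized on the hostname.

package main

import "fmt"

// hostnameScript returns a shell snippet that sets the hostname and keeps
// /etc/hosts in sync with it, mirroring the commands captured in the log above.
func hostnameScript(hostname string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostnameScript("ha-326651-m03"))
}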
	I0731 18:37:20.519482  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:37:20.519499  413977 buildroot.go:174] setting up certificates
	I0731 18:37:20.519508  413977 provision.go:84] configureAuth start
	I0731 18:37:20.519517  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.519800  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:20.522557  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.522949  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.522976  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.523172  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.525648  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.525963  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.525999  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.526124  413977 provision.go:143] copyHostCerts
	I0731 18:37:20.526157  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:37:20.526191  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:37:20.526200  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:37:20.526261  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:37:20.526341  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:37:20.526359  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:37:20.526365  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:37:20.526388  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:37:20.526435  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:37:20.526451  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:37:20.526457  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:37:20.526476  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:37:20.526524  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651-m03 san=[127.0.0.1 192.168.39.50 ha-326651-m03 localhost minikube]
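The last line above generates the Docker server certificate with a SAN list that mixes IP addresses and DNS names ([127.0.0.1 192.168.39.50 ha-326651-m03 localhost minikube]). A small Go sketch of how such a list is typically split into the x509 template fields before signing; the function and its default values are illustrative assumptions, and the CA signing step is omitted.

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate splits a SAN list into IP and DNS SANs and returns an
// x509 template suitable for signing by a CA elsewhere. Validity and key
// usage values here are illustrative, not minikube's.
func serverCertTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	return tmpl
}

func main() {
	// SAN list taken from the log line above.
	sans := []string{"127.0.0.1", "192.168.39.50", "ha-326651-m03", "localhost", "minikube"}
	t := serverCertTemplate("jenkins.ha-326651-m03", sans)
	fmt.Printf("IP SANs: %v\nDNS SANs: %v\n", t.IPAddresses, t.DNSNames)
}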
	I0731 18:37:20.769988  413977 provision.go:177] copyRemoteCerts
	I0731 18:37:20.770051  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:37:20.770076  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.772989  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.773274  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.773304  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.773456  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.773676  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.773824  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.773976  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:20.856809  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:37:20.856890  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:37:20.882984  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:37:20.883068  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:37:20.909134  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:37:20.909222  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:37:20.933028  413977 provision.go:87] duration metric: took 413.504588ms to configureAuth
	I0731 18:37:20.933064  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:37:20.933298  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:20.933377  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.936045  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.936362  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.936424  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.936608  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.936855  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.937035  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.937221  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.937398  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.937615  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.937634  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:37:21.200546  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:37:21.200577  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:37:21.200587  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetURL
	I0731 18:37:21.201925  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using libvirt version 6000000
	I0731 18:37:21.204087  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.204537  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.204558  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.204717  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:37:21.204732  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:37:21.204740  413977 client.go:171] duration metric: took 26.552956298s to LocalClient.Create
	I0731 18:37:21.204769  413977 start.go:167] duration metric: took 26.553031792s to libmachine.API.Create "ha-326651"
	I0731 18:37:21.204782  413977 start.go:293] postStartSetup for "ha-326651-m03" (driver="kvm2")
	I0731 18:37:21.204798  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:37:21.204833  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.205107  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:37:21.205135  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.207425  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.207784  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.207813  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.207930  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.208124  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.208275  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.208431  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.291787  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:37:21.296302  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:37:21.296338  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:37:21.296453  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:37:21.296569  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:37:21.296584  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:37:21.296787  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:37:21.308040  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:37:21.333597  413977 start.go:296] duration metric: took 128.798747ms for postStartSetup
	I0731 18:37:21.333658  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:37:21.334235  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:21.337257  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.337609  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.337639  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.337918  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:37:21.338174  413977 start.go:128] duration metric: took 26.705444424s to createHost
	I0731 18:37:21.338200  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.340433  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.340727  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.340754  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.340982  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.341195  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.341366  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.341505  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.341643  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:21.341799  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:21.341808  413977 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:37:21.445509  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722451041.423659203
	
	I0731 18:37:21.445535  413977 fix.go:216] guest clock: 1722451041.423659203
	I0731 18:37:21.445546  413977 fix.go:229] Guest: 2024-07-31 18:37:21.423659203 +0000 UTC Remote: 2024-07-31 18:37:21.338186845 +0000 UTC m=+160.651491096 (delta=85.472358ms)
	I0731 18:37:21.445572  413977 fix.go:200] guest clock delta is within tolerance: 85.472358ms
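fix.go compares the guest clock reported over SSH (1722451041.423659203) with the host's notion of the same instant and accepts the ~85 ms difference as within tolerance. Below is a toy Go reproduction of that comparison using the values from the log; the 2-second tolerance is an assumed illustrative threshold, since the actual limit is not printed here.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute difference between guest and host clocks
// and whether it falls inside the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the "guest clock" and "Remote" log lines above.
	guest := time.Unix(1722451041, 423659203).UTC()
	host := time.Date(2024, 7, 31, 18, 37, 21, 338186845, time.UTC)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}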
	I0731 18:37:21.445577  413977 start.go:83] releasing machines lock for "ha-326651-m03", held for 26.812959209s
	I0731 18:37:21.445595  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.445940  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:21.449123  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.449558  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.449589  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.451678  413977 out.go:177] * Found network options:
	I0731 18:37:21.452816  413977 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.202
	W0731 18:37:21.453988  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 18:37:21.454008  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:37:21.454024  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454513  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454704  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454791  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:37:21.454836  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	W0731 18:37:21.454904  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 18:37:21.454919  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:37:21.454983  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:37:21.454998  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.457457  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457776  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.457801  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457827  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457943  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.458120  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.458239  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.458263  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.458272  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.458406  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.458441  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.458563  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.458678  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.458829  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.692148  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:37:21.698327  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:37:21.698395  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:37:21.718593  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:37:21.718621  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:37:21.718696  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:37:21.737923  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:37:21.753184  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:37:21.753250  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:37:21.768064  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:37:21.784310  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:37:21.908161  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:37:22.083038  413977 docker.go:233] disabling docker service ...
	I0731 18:37:22.083124  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:37:22.098655  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:37:22.111970  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:37:22.232896  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:37:22.360924  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:37:22.376636  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:37:22.396880  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:37:22.396952  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.408234  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:37:22.408307  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.419945  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.431147  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.443100  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:37:22.454964  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.466805  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.485897  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
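The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, cgroupfs as the cgroup manager, conmon_cgroup = "pod", and a default_sysctls entry that opens unprivileged ports. The Go sketch below renders roughly what those keys look like in the drop-in afterwards; the real file carries additional settings, and the section placement is an assumption based on the standard CRI-O config layout.

package main

import "fmt"

// crioDropIn renders only the keys that the sed edits above target; the
// actual 02-crio.conf contains other settings that are left untouched.
func crioDropIn(pauseImage, cgroupManager string) string {
	return fmt.Sprintf(`[crio.image]
pause_image = %q

[crio.runtime]
cgroup_manager = %q
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`, pauseImage, cgroupManager)
}

func main() {
	fmt.Print(crioDropIn("registry.k8s.io/pause:3.9", "cgroupfs"))
}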
	I0731 18:37:22.497176  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:37:22.507090  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:37:22.507165  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:37:22.521445  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:37:22.534157  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:22.676758  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:37:22.821966  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:37:22.822039  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:37:22.828177  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:37:22.828256  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:37:22.832241  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:37:22.873183  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:37:22.873288  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:37:22.903426  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:37:22.933611  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:37:22.935044  413977 out.go:177]   - env NO_PROXY=192.168.39.220
	I0731 18:37:22.936181  413977 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.202
	I0731 18:37:22.937307  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:22.940145  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:22.940560  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:22.940589  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:22.940759  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:37:22.945274  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:37:22.958156  413977 mustload.go:65] Loading cluster: ha-326651
	I0731 18:37:22.958434  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:22.958818  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:22.958887  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:22.974999  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0731 18:37:22.975528  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:22.976030  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:22.976067  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:22.976417  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:22.976611  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:37:22.978267  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:37:22.978610  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:22.978650  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:22.993692  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0731 18:37:22.994091  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:22.994512  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:22.994533  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:22.994868  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:22.995063  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:37:22.995233  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.50
	I0731 18:37:22.995246  413977 certs.go:194] generating shared ca certs ...
	I0731 18:37:22.995265  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:22.995412  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:37:22.995450  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:37:22.995460  413977 certs.go:256] generating profile certs ...
	I0731 18:37:22.995528  413977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:37:22.995552  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421
	I0731 18:37:22.995567  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.50 192.168.39.254]
	I0731 18:37:23.355528  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 ...
	I0731 18:37:23.355565  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421: {Name:mkcf338dc55a624e933a8ac41432a2ed33c665ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:23.355767  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421 ...
	I0731 18:37:23.355786  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421: {Name:mk4c41ccc495694c66da6b0b64e94b8844359729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:23.355892  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:37:23.356052  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:37:23.356222  413977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:37:23.356244  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:37:23.356263  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:37:23.356280  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:37:23.356299  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:37:23.356320  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:37:23.356338  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:37:23.356359  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:37:23.356394  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:37:23.356463  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:37:23.356505  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:37:23.356519  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:37:23.356555  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:37:23.356592  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:37:23.356620  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:37:23.356667  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:37:23.356696  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.356710  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.356723  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:23.356763  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:37:23.359908  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:23.360318  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:37:23.360345  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:23.360527  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:37:23.360758  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:37:23.360946  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:37:23.361102  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:37:23.436854  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 18:37:23.442449  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 18:37:23.455049  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 18:37:23.461033  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 18:37:23.473443  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 18:37:23.478346  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 18:37:23.489292  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 18:37:23.493509  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 18:37:23.503713  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 18:37:23.507831  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 18:37:23.519242  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 18:37:23.524301  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 18:37:23.534575  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:37:23.561693  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:37:23.586179  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:37:23.610694  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:37:23.636016  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 18:37:23.660606  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:37:23.685418  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:37:23.709921  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:37:23.734138  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:37:23.758612  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:37:23.783065  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:37:23.807696  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 18:37:23.824745  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 18:37:23.842808  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 18:37:23.860365  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 18:37:23.876879  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 18:37:23.893606  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 18:37:23.909694  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 18:37:23.925716  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:37:23.931613  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:37:23.942303  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.947004  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.947056  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.952885  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 18:37:23.963671  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:37:23.974424  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.979179  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.979249  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.985074  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:37:23.995420  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:37:24.005627  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.010052  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.010148  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.015982  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
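
The three command groups above install each copied CA into the node's trust store: minikube confirms the PEM is present, computes its OpenSSL subject hash, and symlinks it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can resolve it. A minimal Go sketch of that hash-and-symlink step, assuming a local openssl binary and hypothetical paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA mirrors the "openssl x509 -hash -noout" + "ln -fs" sequence in the
// log: compute the subject hash of certPath and link it into trustDir as <hash>.0.
func installCA(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths, for illustration only.
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
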
	I0731 18:37:24.026995  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:37:24.031492  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:37:24.031554  413977 kubeadm.go:934] updating node {m03 192.168.39.50 8443 v1.30.3 crio true true} ...
	I0731 18:37:24.031661  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:37:24.031693  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:37:24.031735  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:37:24.047475  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:37:24.047569  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
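
The generated manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml as a static pod; kube-vip v0.8.0 then leader-elects via the plndr-cp-lock lease and advertises the VIP 192.168.39.254 on port 8443 with control-plane load balancing enabled. A small sketch, assuming the sigs.k8s.io/yaml and k8s.io/api packages, of how such a manifest could be decoded and sanity-checked before being written out:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical local copy of the generated manifest.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err) // the manifest must stay a valid Pod for the kubelet to run it
	}
	// Pull the VIP address and load-balancer settings out of the env list.
	for _, e := range pod.Spec.Containers[0].Env {
		switch e.Name {
		case "address", "port", "lb_enable":
			fmt.Printf("%s=%s\n", e.Name, e.Value)
		}
	}
}
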
	I0731 18:37:24.047638  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:37:24.058198  413977 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 18:37:24.058264  413977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 18:37:24.069883  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 18:37:24.069892  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 18:37:24.069923  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:37:24.069938  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:37:24.069942  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 18:37:24.069961  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:37:24.070020  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:37:24.070030  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:37:24.080018  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 18:37:24.080065  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 18:37:24.080339  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 18:37:24.080362  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 18:37:24.094932  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:37:24.095016  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:37:24.208092  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 18:37:24.208143  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
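
Because /var/lib/minikube/binaries/v1.30.3 does not exist yet on the new node, each of kubeadm, kubectl and kubelet fails the stat-based existence check and is pushed from the local cache instead (the kubelet alone is roughly 100 MB). A simplified local-filesystem sketch of that check-then-copy pattern (the real transfer goes over scp; the paths here are hypothetical):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies name from cacheDir to destDir only when the destination
// is missing, mirroring the stat + scp sequence in the log above.
func ensureBinary(cacheDir, destDir, name string) error {
	dst := filepath.Join(destDir, name)
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	src, err := os.Open(filepath.Join(cacheDir, name))
	if err != nil {
		return err
	}
	defer src.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, src)
	return err
}

func main() {
	for _, b := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := ensureBinary("/tmp/cache/v1.30.3", "/tmp/binaries/v1.30.3", b); err != nil {
			fmt.Fprintln(os.Stderr, b, err)
		}
	}
}
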
	I0731 18:37:24.979760  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 18:37:24.990405  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 18:37:25.007798  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:37:25.024522  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 18:37:25.041751  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:37:25.046230  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
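
The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: any earlier line ending with that host name is filtered out before the fresh "192.168.39.254	control-plane.minikube.internal" entry is appended and the file is copied back into place. The same idea in Go, as an illustration only (writing to a scratch path rather than the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites hostsPath so that exactly one line maps host to ip.
func upsertHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // drop blanks and any stale mapping for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Hypothetical scratch file standing in for /etc/hosts.
	if err := upsertHost("/tmp/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
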
	I0731 18:37:25.059443  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:25.186943  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:37:25.207644  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:37:25.208083  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:25.208125  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:25.225643  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0731 18:37:25.226224  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:25.226824  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:25.226856  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:25.227192  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:25.227409  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:37:25.227582  413977 start.go:317] joinCluster: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:37:25.227764  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 18:37:25.227790  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:37:25.230925  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:25.231410  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:37:25.231452  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:25.231562  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:37:25.231748  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:37:25.231901  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:37:25.232063  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:37:25.531546  413977 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:37:25.531610  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fby254.sm0cc13ve70otyt8 --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m03 --control-plane --apiserver-advertise-address=192.168.39.50 --apiserver-bind-port=8443"
	I0731 18:37:49.867127  413977 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fby254.sm0cc13ve70otyt8 --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m03 --control-plane --apiserver-advertise-address=192.168.39.50 --apiserver-bind-port=8443": (24.335481808s)
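
The join itself takes about 24.3 s and brings ha-326651-m03 up as a third control-plane member: the command advertises the node's own IP (192.168.39.50), binds its API server on 8443, and pins the discovery step to a CA-cert hash. That hash is the SHA-256 of the cluster CA's Subject Public Key Info; a small Go sketch of how it can be recomputed from a local copy of the CA certificate (the file name is a placeholder):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical local copy of the cluster CA certificate.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's --discovery-token-ca-cert-hash is sha256 over the CA's
	// Subject Public Key Info, printed as "sha256:<hex>".
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
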
	I0731 18:37:49.867179  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 18:37:50.378941  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651-m03 minikube.k8s.io/updated_at=2024_07_31T18_37_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=false
	I0731 18:37:50.527273  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326651-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 18:37:50.656890  413977 start.go:319] duration metric: took 25.429303959s to joinCluster
	I0731 18:37:50.657001  413977 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:37:50.657367  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:50.658501  413977 out.go:177] * Verifying Kubernetes components...
	I0731 18:37:50.660034  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:50.963606  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:37:51.019362  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:37:51.019656  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 18:37:51.019725  413977 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I0731 18:37:51.019987  413977 node_ready.go:35] waiting up to 6m0s for node "ha-326651-m03" to be "Ready" ...
	I0731 18:37:51.020079  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:51.020090  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:51.020101  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:51.020111  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:51.023093  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:37:51.520174  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:51.520197  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:51.520209  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:51.520216  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:51.523954  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:52.020852  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:52.020928  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:52.020947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:52.020959  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:52.024734  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:52.520559  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:52.520589  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:52.520600  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:52.520605  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:52.523898  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:53.020720  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:53.020743  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:53.020751  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:53.020754  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:53.024464  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:53.025297  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:37:53.520563  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:53.520585  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:53.520593  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:53.520596  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:53.524043  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:54.021117  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:54.021143  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:54.021154  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:54.021161  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:54.024853  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:54.521245  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:54.521275  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:54.521286  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:54.521290  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:54.525584  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:55.020575  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:55.020599  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:55.020608  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:55.020619  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:55.024041  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:55.521241  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:55.521267  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:55.521278  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:55.521285  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:55.524183  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:37:55.525023  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:37:56.020919  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:56.020978  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:56.020990  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:56.020996  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:56.024793  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:56.520999  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:56.521030  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:56.521039  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:56.521045  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:56.524880  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:57.020558  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:57.020583  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:57.020592  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:57.020595  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:57.024064  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:57.521232  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:57.521259  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:57.521270  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:57.521276  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:57.525457  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:57.526333  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:37:58.020399  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:58.020422  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:58.020432  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:58.020437  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:58.023824  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:58.521059  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:58.521085  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:58.521096  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:58.521103  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:58.525144  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:59.021061  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:59.021083  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:59.021092  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:59.021095  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:59.024397  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:59.520972  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:59.520997  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:59.521005  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:59.521011  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:59.524720  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:00.020633  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:00.020673  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:00.020701  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:00.020706  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:00.024455  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:00.025230  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:00.520533  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:00.520557  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:00.520566  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:00.520570  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:00.523922  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:01.021019  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:01.021045  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:01.021054  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:01.021061  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:01.024556  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:01.520923  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:01.520950  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:01.520958  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:01.520964  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:01.524976  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:02.020970  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:02.020996  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:02.021007  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:02.021013  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:02.024935  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:02.025463  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:02.520959  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:02.520984  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:02.520993  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:02.520997  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:02.524873  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:03.020811  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:03.020833  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:03.020841  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:03.020845  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:03.024096  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:03.520978  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:03.520999  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:03.521008  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:03.521012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:03.524688  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.020620  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:04.020644  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:04.020653  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:04.020658  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:04.024257  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.521191  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:04.521217  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:04.521227  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:04.521233  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:04.525225  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.525790  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:05.020940  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:05.020965  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:05.020973  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:05.020979  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:05.024447  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:05.520304  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:05.520329  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:05.520338  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:05.520343  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:05.523406  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:06.021018  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:06.021052  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:06.021062  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:06.021067  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:06.025126  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:06.520550  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:06.520575  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:06.520585  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:06.520591  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:06.523794  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:07.020912  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:07.020938  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:07.020947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:07.020956  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:07.024848  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:07.025558  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:07.520941  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:07.520971  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:07.520980  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:07.520987  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:07.524549  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:08.020563  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:08.020586  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:08.020594  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:08.020598  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:08.024468  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:08.520334  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:08.520362  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:08.520388  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:08.520395  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:08.524025  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:09.021226  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:09.021251  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:09.021261  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:09.021266  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:09.024956  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:09.025585  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:09.521043  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:09.521075  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:09.521089  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:09.521092  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:09.524908  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.020899  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.020929  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.020940  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.020947  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.026586  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:38:10.027154  413977 node_ready.go:49] node "ha-326651-m03" has status "Ready":"True"
	I0731 18:38:10.027177  413977 node_ready.go:38] duration metric: took 19.007174611s for node "ha-326651-m03" to be "Ready" ...
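
The roughly 500 ms GET loop above is minikube waiting for the kubelet on ha-326651-m03 to report the NodeReady condition; it flips to "True" after about 19 s. The equivalent check with client-go, as a rough sketch (the kubeconfig path and node name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-326651-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the polling cadence seen above
	}
	fmt.Println("timed out waiting for node")
}
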
	I0731 18:38:10.027188  413977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:38:10.027258  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:10.027268  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.027276  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.027280  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.035717  413977 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 18:38:10.043200  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.043298  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hsr7k
	I0731 18:38:10.043306  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.043314  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.043319  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.046582  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.047642  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.047659  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.047667  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.047672  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.050840  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.051495  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.051515  413977 pod_ready.go:81] duration metric: took 8.283282ms for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.051525  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.051600  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p2tfn
	I0731 18:38:10.051608  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.051615  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.051619  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.054430  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.055531  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.055547  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.055555  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.055559  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.058540  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.059077  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.059101  413977 pod_ready.go:81] duration metric: took 7.57011ms for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.059110  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.059168  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651
	I0731 18:38:10.059176  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.059183  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.059190  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.062091  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.062762  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.062778  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.062788  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.062794  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.066487  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.067039  413977 pod_ready.go:92] pod "etcd-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.067061  413977 pod_ready.go:81] duration metric: took 7.944797ms for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.067070  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.067142  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m02
	I0731 18:38:10.067149  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.067157  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.067161  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.071867  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:10.072519  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:10.072535  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.072543  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.072546  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.075294  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.075774  413977 pod_ready.go:92] pod "etcd-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.075799  413977 pod_ready.go:81] duration metric: took 8.721779ms for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.075812  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.221100  413977 request.go:629] Waited for 145.199845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m03
	I0731 18:38:10.221193  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m03
	I0731 18:38:10.221198  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.221208  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.221211  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.225082  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.421064  413977 request.go:629] Waited for 195.324231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.421150  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.421158  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.421168  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.421177  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.424696  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.425371  413977 pod_ready.go:92] pod "etcd-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.425390  413977 pod_ready.go:81] duration metric: took 349.57135ms for pod "etcd-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
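
The "Waited for ... due to client-side throttling, not priority and fairness" lines interleaved with these per-pod checks come from client-go's own rate limiter: the rest.Config logged earlier carries QPS:0, Burst:0, so the client falls back to its defaults (5 requests/s with a burst of 10 in current client-go releases) and spaces the node and pod GETs out locally. A sketch of how a consumer could raise those limits when building its own client (the values are illustrative):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go applies its default limiter and
	// logs client-side throttling waits like the ones in this report.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // hand cs to the readiness checks; its GETs are now throttled far less aggressively
}
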
	I0731 18:38:10.425406  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.621716  413977 request.go:629] Waited for 196.22376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:38:10.621796  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:38:10.621805  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.621816  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.621834  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.627527  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:38:10.821394  413977 request.go:629] Waited for 193.164189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.821454  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.821459  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.821466  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.821471  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.824875  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.825388  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.825411  413977 pod_ready.go:81] duration metric: took 399.998459ms for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.825421  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.021931  413977 request.go:629] Waited for 196.409806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:38:11.021996  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:38:11.022001  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.022009  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.022013  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.028369  413977 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 18:38:11.221479  413977 request.go:629] Waited for 192.390158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:11.221571  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:11.221577  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.221591  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.221598  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.225466  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:11.226265  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:11.226285  413977 pod_ready.go:81] duration metric: took 400.858148ms for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.226295  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.421487  413977 request.go:629] Waited for 195.11476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m03
	I0731 18:38:11.421580  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m03
	I0731 18:38:11.421589  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.421600  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.421609  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.425699  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:11.621525  413977 request.go:629] Waited for 194.372228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:11.621602  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:11.621609  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.621617  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.621623  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.625368  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:11.625802  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:11.625820  413977 pod_ready.go:81] duration metric: took 399.518861ms for pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.625829  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.820973  413977 request.go:629] Waited for 195.0508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:38:11.821037  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:38:11.821043  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.821051  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.821057  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.825144  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:12.021362  413977 request.go:629] Waited for 195.36707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:12.021423  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:12.021428  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.021436  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.021442  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.024957  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.025602  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.025620  413977 pod_ready.go:81] duration metric: took 399.784534ms for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.025630  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.221703  413977 request.go:629] Waited for 195.978806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:38:12.221780  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:38:12.221787  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.221797  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.221805  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.225192  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.421416  413977 request.go:629] Waited for 195.354453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:12.421489  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:12.421495  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.421503  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.421507  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.425421  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.425916  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.425934  413977 pod_ready.go:81] duration metric: took 400.298077ms for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.425943  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.620972  413977 request.go:629] Waited for 194.932661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m03
	I0731 18:38:12.621053  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m03
	I0731 18:38:12.621059  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.621067  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.621073  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.624964  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.821068  413977 request.go:629] Waited for 195.318196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:12.821177  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:12.821189  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.821201  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.821209  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.825278  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:12.825995  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.826025  413977 pod_ready.go:81] duration metric: took 400.072019ms for pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.826040  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.021215  413977 request.go:629] Waited for 195.095055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:38:13.021300  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:38:13.021306  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.021314  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.021321  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.025388  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:13.221247  413977 request.go:629] Waited for 195.267433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:13.221340  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:13.221346  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.221357  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.221366  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.225916  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:13.226871  413977 pod_ready.go:92] pod "kube-proxy-hg6sj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:13.226891  413977 pod_ready.go:81] duration metric: took 400.843747ms for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.226901  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lhprb" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.420997  413977 request.go:629] Waited for 193.980744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhprb
	I0731 18:38:13.421086  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhprb
	I0731 18:38:13.421094  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.421106  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.421117  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.424452  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:13.621523  413977 request.go:629] Waited for 196.378142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:13.621596  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:13.621603  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.621611  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.621616  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.625410  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:13.626111  413977 pod_ready.go:92] pod "kube-proxy-lhprb" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:13.626136  413977 pod_ready.go:81] duration metric: took 399.227736ms for pod "kube-proxy-lhprb" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.626145  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.821285  413977 request.go:629] Waited for 195.069421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:38:13.821356  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:38:13.821362  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.821370  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.821375  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.825063  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.020994  413977 request.go:629] Waited for 195.299514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.021082  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.021090  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.021098  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.021102  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.025085  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.026142  413977 pod_ready.go:92] pod "kube-proxy-stqb2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.026167  413977 pod_ready.go:81] duration metric: took 400.013833ms for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.026179  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.221390  413977 request.go:629] Waited for 195.112801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:38:14.221451  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:38:14.221457  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.221467  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.221473  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.225827  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:14.421805  413977 request.go:629] Waited for 195.378126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:14.421877  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:14.421882  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.421890  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.421894  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.425460  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.426059  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.426086  413977 pod_ready.go:81] duration metric: took 399.894725ms for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.426099  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.621177  413977 request.go:629] Waited for 194.98251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:38:14.621273  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:38:14.621285  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.621295  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.621304  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.624878  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.821926  413977 request.go:629] Waited for 196.372921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.821992  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.821997  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.822006  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.822012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.825529  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.826158  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.826179  413977 pod_ready.go:81] duration metric: took 400.068887ms for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.826188  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:15.021612  413977 request.go:629] Waited for 195.3289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m03
	I0731 18:38:15.021684  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m03
	I0731 18:38:15.021691  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.021700  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.021706  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.025857  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:15.220980  413977 request.go:629] Waited for 194.283799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:15.221085  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:15.221096  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.221107  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.221125  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.224598  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:15.225281  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:15.225300  413977 pod_ready.go:81] duration metric: took 399.106803ms for pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:15.225311  413977 pod_ready.go:38] duration metric: took 5.198111046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:38:15.225329  413977 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:38:15.225387  413977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:38:15.243672  413977 api_server.go:72] duration metric: took 24.586631178s to wait for apiserver process to appear ...
	I0731 18:38:15.243711  413977 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:38:15.243743  413977 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0731 18:38:15.248624  413977 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0731 18:38:15.248719  413977 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I0731 18:38:15.248730  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.248742  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.248754  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.249814  413977 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 18:38:15.249890  413977 api_server.go:141] control plane version: v1.30.3
	I0731 18:38:15.249906  413977 api_server.go:131] duration metric: took 6.187462ms to wait for apiserver health ...
	I0731 18:38:15.249921  413977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:38:15.421359  413977 request.go:629] Waited for 171.338586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.421420  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.421425  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.421433  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.421437  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.430726  413977 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 18:38:15.437067  413977 system_pods.go:59] 24 kube-system pods found
	I0731 18:38:15.437103  413977 system_pods.go:61] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:38:15.437109  413977 system_pods.go:61] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:38:15.437114  413977 system_pods.go:61] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:38:15.437124  413977 system_pods.go:61] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:38:15.437128  413977 system_pods.go:61] "etcd-ha-326651-m03" [ad71c742-0bb9-4137-b09a-fae975369a6a] Running
	I0731 18:38:15.437132  413977 system_pods.go:61] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:38:15.437137  413977 system_pods.go:61] "kindnet-86n7r" [6430d759-54b9-44cb-b0d1-b36311f326ec] Running
	I0731 18:38:15.437141  413977 system_pods.go:61] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:38:15.437145  413977 system_pods.go:61] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:38:15.437150  413977 system_pods.go:61] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:38:15.437155  413977 system_pods.go:61] "kube-apiserver-ha-326651-m03" [e12967b2-20f8-4c88-9f13-24b09828a0bc] Running
	I0731 18:38:15.437161  413977 system_pods.go:61] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:38:15.437166  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:38:15.437175  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m03" [9173f006-38ea-4e55-a4b7-447fc467725f] Running
	I0731 18:38:15.437181  413977 system_pods.go:61] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:38:15.437187  413977 system_pods.go:61] "kube-proxy-lhprb" [8959da87-d806-49dc-be69-c495fb8de9ff] Running
	I0731 18:38:15.437193  413977 system_pods.go:61] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:38:15.437201  413977 system_pods.go:61] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:38:15.437206  413977 system_pods.go:61] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:38:15.437212  413977 system_pods.go:61] "kube-scheduler-ha-326651-m03" [047e337d-b07a-4ca2-893a-2310b5c53319] Running
	I0731 18:38:15.437218  413977 system_pods.go:61] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:38:15.437225  413977 system_pods.go:61] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:38:15.437230  413977 system_pods.go:61] "kube-vip-ha-326651-m03" [ed447ffb-4803-476f-9c83-d3573aeb2f8a] Running
	I0731 18:38:15.437237  413977 system_pods.go:61] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:38:15.437246  413977 system_pods.go:74] duration metric: took 187.316741ms to wait for pod list to return data ...
	I0731 18:38:15.437258  413977 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:38:15.621572  413977 request.go:629] Waited for 184.226167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:38:15.621654  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:38:15.621661  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.621673  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.621693  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.626128  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:15.626281  413977 default_sa.go:45] found service account: "default"
	I0731 18:38:15.626301  413977 default_sa.go:55] duration metric: took 189.035538ms for default service account to be created ...
	I0731 18:38:15.626313  413977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:38:15.821678  413977 request.go:629] Waited for 195.265839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.821749  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.821756  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.821768  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.821777  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.829324  413977 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 18:38:15.835995  413977 system_pods.go:86] 24 kube-system pods found
	I0731 18:38:15.836028  413977 system_pods.go:89] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:38:15.836036  413977 system_pods.go:89] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:38:15.836043  413977 system_pods.go:89] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:38:15.836051  413977 system_pods.go:89] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:38:15.836056  413977 system_pods.go:89] "etcd-ha-326651-m03" [ad71c742-0bb9-4137-b09a-fae975369a6a] Running
	I0731 18:38:15.836061  413977 system_pods.go:89] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:38:15.836067  413977 system_pods.go:89] "kindnet-86n7r" [6430d759-54b9-44cb-b0d1-b36311f326ec] Running
	I0731 18:38:15.836075  413977 system_pods.go:89] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:38:15.836082  413977 system_pods.go:89] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:38:15.836089  413977 system_pods.go:89] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:38:15.836097  413977 system_pods.go:89] "kube-apiserver-ha-326651-m03" [e12967b2-20f8-4c88-9f13-24b09828a0bc] Running
	I0731 18:38:15.836110  413977 system_pods.go:89] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:38:15.836121  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:38:15.836129  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m03" [9173f006-38ea-4e55-a4b7-447fc467725f] Running
	I0731 18:38:15.836138  413977 system_pods.go:89] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:38:15.836144  413977 system_pods.go:89] "kube-proxy-lhprb" [8959da87-d806-49dc-be69-c495fb8de9ff] Running
	I0731 18:38:15.836151  413977 system_pods.go:89] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:38:15.836164  413977 system_pods.go:89] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:38:15.836173  413977 system_pods.go:89] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:38:15.836181  413977 system_pods.go:89] "kube-scheduler-ha-326651-m03" [047e337d-b07a-4ca2-893a-2310b5c53319] Running
	I0731 18:38:15.836190  413977 system_pods.go:89] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:38:15.836196  413977 system_pods.go:89] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:38:15.836205  413977 system_pods.go:89] "kube-vip-ha-326651-m03" [ed447ffb-4803-476f-9c83-d3573aeb2f8a] Running
	I0731 18:38:15.836211  413977 system_pods.go:89] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:38:15.836225  413977 system_pods.go:126] duration metric: took 209.903247ms to wait for k8s-apps to be running ...
	I0731 18:38:15.836238  413977 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:38:15.836291  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:38:15.854916  413977 system_svc.go:56] duration metric: took 18.666909ms WaitForService to wait for kubelet
	I0731 18:38:15.854954  413977 kubeadm.go:582] duration metric: took 25.197919918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:38:15.854984  413977 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:38:16.021541  413977 request.go:629] Waited for 166.470634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I0731 18:38:16.021634  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I0731 18:38:16.021645  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:16.021657  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:16.021663  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:16.026196  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:16.027352  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027387  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027401  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027406  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027411  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027417  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027423  413977 node_conditions.go:105] duration metric: took 172.433035ms to run NodePressure ...
	I0731 18:38:16.027438  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:38:16.027462  413977 start.go:255] writing updated cluster config ...
	I0731 18:38:16.027756  413977 ssh_runner.go:195] Run: rm -f paused
	I0731 18:38:16.079451  413977 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:38:16.081596  413977 out.go:177] * Done! kubectl is now configured to use "ha-326651" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.659721274Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451318659686138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ca1be48-e8f0-4b25-ad5f-3c850492c584 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.660515520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dbae580a-2620-471c-baae-c4380ec5de8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.660579769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dbae580a-2620-471c-baae-c4380ec5de8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.660861161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dbae580a-2620-471c-baae-c4380ec5de8a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.705395862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12c45181-82e2-4b08-a00c-d3a9c9252ece name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.705513946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12c45181-82e2-4b08-a00c-d3a9c9252ece name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.707441369Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7862b68-3f18-4f6e-a893-c5feb8737983 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.707902077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451318707878243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7862b68-3f18-4f6e-a893-c5feb8737983 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.708520808Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24a597c3-5f2e-466d-b7f6-68f1f56d6d5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.708569282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24a597c3-5f2e-466d-b7f6-68f1f56d6d5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.708848723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24a597c3-5f2e-466d-b7f6-68f1f56d6d5c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.747631987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1817c8d1-d8cc-4a13-8b79-bd45f69758dc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.747724440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1817c8d1-d8cc-4a13-8b79-bd45f69758dc name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.748863215Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4baaa799-b486-4a0b-b920-b9a141b308d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.749575520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451318749544821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4baaa799-b486-4a0b-b920-b9a141b308d9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.750099025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f38e96dd-4a52-483b-8ce3-09b90801d5e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.750221423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f38e96dd-4a52-483b-8ce3-09b90801d5e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.750486291Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f38e96dd-4a52-483b-8ce3-09b90801d5e6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.788273368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bc30dbc-333b-4edc-9484-910744726e3a name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.788368481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bc30dbc-333b-4edc-9484-910744726e3a name=/runtime.v1.RuntimeService/Version
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.789420527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3eda19b8-9869-499c-ae5d-005abf9ff92a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.789921865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451318789899133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3eda19b8-9869-499c-ae5d-005abf9ff92a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.790545523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9d4a2fb-fb44-4b6c-bd9c-43acb5f64921 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.790616799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9d4a2fb-fb44-4b6c-bd9c-43acb5f64921 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:41:58 ha-326651 crio[679]: time="2024-07-31 18:41:58.790952589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9d4a2fb-fb44-4b6c-bd9c-43acb5f64921 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f413f75c91415       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   25be6f24676d4       busybox-fc5497c4f-mknlp
	e606e2ddae6ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   bba5c545e084b       storage-provisioner
	68c50c65ea238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   d651e4190c72a       coredns-7db6d8ff4d-hsr7k
	36f0c9b04bb2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   8a4d6fb11ec09       coredns-7db6d8ff4d-p2tfn
	81362a0e08184       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   8783b79032fde       kindnet-n7q8p
	5abc9372bd5fd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   4ed8613feb5ec       kube-proxy-hg6sj
	753a0f44161e1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   e88355a2eb9b7       kube-vip-ha-326651
	a34e4c7715d1c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   8b070b038e8ac       kube-apiserver-ha-326651
	c40e9679adc35       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   4bc17ce1c9d2f       kube-scheduler-ha-326651
	44a042c1af736       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   4e3fbd67a5009       kube-controller-manager-ha-326651
	bd3d8dbedb96a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   1e765f5d9b3b0       etcd-ha-326651
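	
	The container status table above is the same list returned by the /runtime.v1.RuntimeService/ListContainers calls logged earlier. A minimal sketch of the equivalent query directly against the runtime (assuming crictl inside the ha-326651 guest is configured for the CRI-O socket; the exact minikube binary path is whatever build is under test):
	
	    minikube -p ha-326651 ssh "sudo crictl ps -a -o table"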
	
	
	==> coredns [36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7] <==
	[INFO] 10.244.1.2:47344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198871s
	[INFO] 10.244.1.2:38776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144715s
	[INFO] 10.244.1.2:41083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003145331s
	[INFO] 10.244.1.2:43785 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150568s
	[INFO] 10.244.2.2:50028 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001760635s
	[INFO] 10.244.2.2:45304 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093222s
	[INFO] 10.244.2.2:36540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140369s
	[INFO] 10.244.0.4:43466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105451s
	[INFO] 10.244.0.4:43878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152423s
	[INFO] 10.244.0.4:49227 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079008s
	[INFO] 10.244.0.4:47339 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074836s
	[INFO] 10.244.0.4:60002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056953s
	[INFO] 10.244.1.2:60772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013788s
	[INFO] 10.244.1.2:34997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091978s
	[INFO] 10.244.2.2:48501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137292s
	[INFO] 10.244.2.2:41701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113322s
	[INFO] 10.244.2.2:46841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192541s
	[INFO] 10.244.2.2:37979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066316s
	[INFO] 10.244.0.4:41261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093714s
	[INFO] 10.244.0.4:56128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073138s
	[INFO] 10.244.1.2:60703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131127s
	[INFO] 10.244.1.2:47436 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239598s
	[INFO] 10.244.1.2:57459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181068s
	[INFO] 10.244.2.2:56898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174969s
	[INFO] 10.244.2.2:33868 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108451s
	
	
	==> coredns [68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33] <==
	[INFO] 10.244.2.2:57152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266349s
	[INFO] 10.244.2.2:48987 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.0027298s
	[INFO] 10.244.0.4:46694 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002128574s
	[INFO] 10.244.1.2:43669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113288s
	[INFO] 10.244.1.2:41521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133016s
	[INFO] 10.244.1.2:38952 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113284s
	[INFO] 10.244.2.2:37151 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132014s
	[INFO] 10.244.2.2:52172 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280659s
	[INFO] 10.244.2.2:43370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363635s
	[INFO] 10.244.2.2:52527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117452s
	[INFO] 10.244.2.2:48596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117278s
	[INFO] 10.244.0.4:55816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992063s
	[INFO] 10.244.0.4:33045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291238s
	[INFO] 10.244.0.4:37880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043099s
	[INFO] 10.244.1.2:40143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128845s
	[INFO] 10.244.1.2:48970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131569s
	[INFO] 10.244.0.4:57102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075402s
	[INFO] 10.244.0.4:54508 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004372s
	[INFO] 10.244.1.2:37053 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000194922s
	[INFO] 10.244.2.2:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129881s
	[INFO] 10.244.2.2:48437 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148815s
	[INFO] 10.244.0.4:50060 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094079s
	[INFO] 10.244.0.4:42736 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105289s
	[INFO] 10.244.0.4:43280 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052254s
	[INFO] 10.244.0.4:47658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074002s
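	
	Both coredns instances above answer cluster-local and host.minikube.internal queries quickly (sub-millisecond to a few milliseconds), with only the expected NXDOMAIN results for search-path expansions such as kubernetes.default. A minimal sketch for pulling the same per-pod log, assuming the kubeconfig context created for this profile is named ha-326651:
	
	    kubectl --context ha-326651 -n kube-system logs coredns-7db6d8ff4d-hsr7k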
	
	
	==> describe nodes <==
	Name:               ha-326651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:41:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-326651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 419482855e6c4b5d814fd4a3e9e4847f
	  System UUID:                41948285-5e6c-4b5d-814f-d4a3e9e4847f
	  Boot ID:                    87f7122f-f0c1-4fc2-964d-0fcb352e2937
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mknlp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 coredns-7db6d8ff4d-hsr7k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7db6d8ff4d-p2tfn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-326651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m40s
	  kube-system                 kindnet-n7q8p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-326651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-controller-manager-ha-326651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 kube-proxy-hg6sj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-326651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-vip-ha-326651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m24s  kube-proxy       
	  Normal  Starting                 6m39s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m39s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s  kubelet          Node ha-326651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s  kubelet          Node ha-326651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s  kubelet          Node ha-326651 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-326651 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal  RegisteredNode           3m55s  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
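	
	The per-node blocks in this section are standard kubectl describe output. A minimal sketch for regenerating them against the same cluster, assuming the kubeconfig context carries the profile name ha-326651:
	
	    kubectl --context ha-326651 describe nodes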
	
	
	Name:               ha-326651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:36:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:39:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-326651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e6699cde3924aaf94b25ab366c2acb8
	  System UUID:                2e6699cd-e392-4aaf-94b2-5ab366c2acb8
	  Boot ID:                    5c1932c2-b9e7-4809-bb21-3c186514aaf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cs6t8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-326651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m28s
	  kube-system                 kindnet-7l9l7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-326651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-controller-manager-ha-326651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-stqb2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-326651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-326651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m30s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m30s)  kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m30s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-326651-m02 status is now: NodeNotReady
	
	
	Name:               ha-326651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_37_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:37:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:41:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:38:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    ha-326651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5e4f78408f84c3ebbac53526a1e33d5
	  System UUID:                b5e4f784-08f8-4c3e-bbac-53526a1e33d5
	  Boot ID:                    2718d67d-347e-4fc9-8721-5da654c627d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lgg6t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-326651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-86n7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-326651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-326651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-lhprb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-326651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-326651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-326651-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	
	
	Name:               ha-326651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:41:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:39:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-326651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbaa436975294cf08fb310ae9ef7d64d
	  System UUID:                cbaa4369-7529-4cf0-8fb3-10ae9ef7d64d
	  Boot ID:                    1d6cf453-df7b-4ae4-8590-9f364b6fc76f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nmwh7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m3s
	  kube-system                 kube-proxy-2nq9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-326651-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 18:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050750] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.802354] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.525465] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.557020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:35] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.063136] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063799] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.163467] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151948] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.299453] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.312604] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.062376] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.195979] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.049374] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.105366] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.338531] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.117589] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 18:36] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a] <==
	{"level":"warn","ts":"2024-07-31T18:41:59.11647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.125192Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.129801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.133037Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.133406Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.145821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.153193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.15995Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.170183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.175224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.189635Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.199188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.21039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.217419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.225614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.230686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.233265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.250412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.268438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.279438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.33269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.345868Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"31c72feb079851fc","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-31T18:41:59.345952Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"31c72feb079851fc","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-07-31T18:41:59.350487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:41:59.352217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:41:59 up 7 min,  0 users,  load average: 0.07, 0.22, 0.15
	Linux ha-326651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821] <==
	I0731 18:41:19.764815       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:41:29.762403       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:41:29.762530       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:41:29.762777       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:41:29.762839       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:41:29.762952       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:41:29.762988       1 main.go:299] handling current node
	I0731 18:41:29.763043       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:41:29.763068       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:41:39.756414       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:41:39.756470       1 main.go:299] handling current node
	I0731 18:41:39.756487       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:41:39.756504       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:41:39.757597       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:41:39.757667       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:41:39.757787       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:41:39.757809       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:41:49.761048       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:41:49.761096       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:41:49.761282       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:41:49.761291       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:41:49.761340       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:41:49.761363       1 main.go:299] handling current node
	I0731 18:41:49.761374       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:41:49.761379       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9] <==
	E0731 18:38:22.210353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36022: use of closed network connection
	E0731 18:38:22.397528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36046: use of closed network connection
	E0731 18:38:22.587433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36072: use of closed network connection
	E0731 18:38:22.788125       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36096: use of closed network connection
	E0731 18:38:22.984460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36112: use of closed network connection
	E0731 18:38:23.156532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36122: use of closed network connection
	E0731 18:38:23.443380       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36140: use of closed network connection
	E0731 18:38:23.624792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36156: use of closed network connection
	E0731 18:38:23.812780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36170: use of closed network connection
	E0731 18:38:24.011428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36186: use of closed network connection
	E0731 18:38:24.197406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36206: use of closed network connection
	E0731 18:38:24.394769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36222: use of closed network connection
	I0731 18:38:59.150272       1 trace.go:236] Trace[1721628771]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bb87a20f-b62f-4f72-ab5b-07163d19ba59,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (31-Jul-2024 18:38:58.473) (total time: 677ms):
	Trace[1721628771]: ---"watchCache locked acquired" 674ms (18:38:59.147)
	Trace[1721628771]: [677.11243ms] [677.11243ms] END
	I0731 18:38:59.154259       1 trace.go:236] Trace[323589200]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a29d14a2-066d-4ed9-b0a4-d9f8dd6cb7e6,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (31-Jul-2024 18:38:58.475) (total time: 679ms):
	Trace[323589200]: ---"watchCache locked acquired" 675ms (18:38:59.151)
	Trace[323589200]: [679.156339ms] [679.156339ms] END
	I0731 18:38:59.156080       1 trace.go:236] Trace[57705693]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8ea97a64-53dd-4514-8f03-4ce418c8f3f0,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy-2nq9j,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-2nq9j/status,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (31-Jul-2024 18:38:58.300) (total time: 855ms):
	Trace[57705693]: ["GuaranteedUpdate etcd3" audit-id:8ea97a64-53dd-4514-8f03-4ce418c8f3f0,key:/pods/kube-system/kube-proxy-2nq9j,type:*core.Pod,resource:pods 855ms (18:38:58.300)
	Trace[57705693]:  ---"Txn call completed" 362ms (18:38:58.665)
	Trace[57705693]:  ---"Txn call completed" 486ms (18:38:59.155)]
	Trace[57705693]: ---"About to apply patch" 362ms (18:38:58.666)
	Trace[57705693]: ---"Object stored in database" 486ms (18:38:59.155)
	Trace[57705693]: [855.742205ms] [855.742205ms] END
	
	
	==> kube-controller-manager [44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c] <==
	I0731 18:38:17.079987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.765766ms"
	I0731 18:38:17.187017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.825986ms"
	I0731 18:38:17.366206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.048099ms"
	I0731 18:38:17.470263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.912729ms"
	E0731 18:38:17.470317       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 18:38:17.470435       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.911µs"
	I0731 18:38:17.483466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.859µs"
	I0731 18:38:19.551693       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.686µs"
	I0731 18:38:20.362733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.568µs"
	I0731 18:38:21.039328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.741076ms"
	I0731 18:38:21.039498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.129µs"
	I0731 18:38:21.079118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.707212ms"
	I0731 18:38:21.079285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.776µs"
	I0731 18:38:21.156480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.571818ms"
	I0731 18:38:21.170495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.949659ms"
	I0731 18:38:21.170665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.991µs"
	E0731 18:38:55.866748       1 certificate_controller.go:146] Sync csr-97qqp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-97qqp": the object has been modified; please apply your changes to the latest version and try again
	E0731 18:38:55.883803       1 certificate_controller.go:146] Sync csr-97qqp failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-97qqp": the object has been modified; please apply your changes to the latest version and try again
	I0731 18:38:56.129906       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326651-m04\" does not exist"
	I0731 18:38:56.148636       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326651-m04" podCIDRs=["10.244.3.0/24"]
	I0731 18:38:59.162188       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651-m04"
	I0731 18:39:17.616258       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-326651-m04"
	I0731 18:40:14.207730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-326651-m04"
	I0731 18:40:14.312451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.09684ms"
	I0731 18:40:14.321029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.231µs"
	
	
	==> kube-proxy [5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd] <==
	I0731 18:35:34.995697       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:35:35.014905       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	I0731 18:35:35.098687       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:35:35.098748       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:35:35.098767       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:35:35.111456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:35:35.114373       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:35:35.114444       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:35:35.115994       1 config.go:192] "Starting service config controller"
	I0731 18:35:35.116291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:35:35.116386       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:35:35.116409       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:35:35.118469       1 config.go:319] "Starting node config controller"
	I0731 18:35:35.118498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:35:35.217304       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 18:35:35.217423       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:35:35.218727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd] <==
	W0731 18:35:18.555306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:35:18.555375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0731 18:35:20.928367       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 18:37:46.964925       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-86n7r\": pod kindnet-86n7r is already assigned to node \"ha-326651-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-86n7r" node="ha-326651-m03"
	E0731 18:37:46.965044       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6430d759-54b9-44cb-b0d1-b36311f326ec(kube-system/kindnet-86n7r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-86n7r"
	E0731 18:37:46.965068       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-86n7r\": pod kindnet-86n7r is already assigned to node \"ha-326651-m03\"" pod="kube-system/kindnet-86n7r"
	I0731 18:37:46.965105       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-86n7r" node="ha-326651-m03"
	I0731 18:38:17.014079       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="1e43b299-d997-4fdd-a163-a9bd587eec7e" pod="default/busybox-fc5497c4f-cs6t8" assumedNode="ha-326651-m02" currentNode="ha-326651-m03"
	E0731 18:38:17.034903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cs6t8\": pod busybox-fc5497c4f-cs6t8 is already assigned to node \"ha-326651-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cs6t8" node="ha-326651-m03"
	E0731 18:38:17.035002       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1e43b299-d997-4fdd-a163-a9bd587eec7e(default/busybox-fc5497c4f-cs6t8) was assumed on ha-326651-m03 but assigned to ha-326651-m02" pod="default/busybox-fc5497c4f-cs6t8"
	E0731 18:38:17.035033       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cs6t8\": pod busybox-fc5497c4f-cs6t8 is already assigned to node \"ha-326651-m02\"" pod="default/busybox-fc5497c4f-cs6t8"
	I0731 18:38:17.035091       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-cs6t8" node="ha-326651-m02"
	E0731 18:38:17.081837       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lgg6t\": pod busybox-fc5497c4f-lgg6t is already assigned to node \"ha-326651-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lgg6t" node="ha-326651-m03"
	E0731 18:38:17.081987       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8cd9612b-0afd-4dde-8ff1-6f8cd620a767(default/busybox-fc5497c4f-lgg6t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lgg6t"
	E0731 18:38:17.082020       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lgg6t\": pod busybox-fc5497c4f-lgg6t is already assigned to node \"ha-326651-m03\"" pod="default/busybox-fc5497c4f-lgg6t"
	I0731 18:38:17.082042       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lgg6t" node="ha-326651-m03"
	E0731 18:38:17.086599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mknlp\": pod busybox-fc5497c4f-mknlp is already assigned to node \"ha-326651\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mknlp" node="ha-326651"
	E0731 18:38:17.086669       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 15a3f7d9-8405-4304-87da-8962e2d81f4e(default/busybox-fc5497c4f-mknlp) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mknlp"
	E0731 18:38:17.086689       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mknlp\": pod busybox-fc5497c4f-mknlp is already assigned to node \"ha-326651\"" pod="default/busybox-fc5497c4f-mknlp"
	I0731 18:38:17.086721       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mknlp" node="ha-326651"
	E0731 18:38:56.213910       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nmwh7\": pod kindnet-nmwh7 is already assigned to node \"ha-326651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nmwh7" node="ha-326651-m04"
	E0731 18:38:56.214255       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nmwh7\": pod kindnet-nmwh7 is already assigned to node \"ha-326651-m04\"" pod="kube-system/kindnet-nmwh7"
	I0731 18:38:56.216255       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nmwh7" node="ha-326651-m04"
	E0731 18:38:56.241628       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sk5s9\": pod kube-proxy-sk5s9 is already assigned to node \"ha-326651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sk5s9" node="ha-326651-m04"
	E0731 18:38:56.241729       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sk5s9\": pod kube-proxy-sk5s9 is already assigned to node \"ha-326651-m04\"" pod="kube-system/kube-proxy-sk5s9"
	
	
	==> kubelet <==
	Jul 31 18:37:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:38:17 ha-326651 kubelet[1381]: I0731 18:38:17.063521    1381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=162.063431576 podStartE2EDuration="2m42.063431576s" podCreationTimestamp="2024-07-31 18:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 18:35:51.534671193 +0000 UTC m=+31.393934534" watchObservedRunningTime="2024-07-31 18:38:17.063431576 +0000 UTC m=+176.922694922"
	Jul 31 18:38:17 ha-326651 kubelet[1381]: I0731 18:38:17.063927    1381 topology_manager.go:215] "Topology Admit Handler" podUID="15a3f7d9-8405-4304-87da-8962e2d81f4e" podNamespace="default" podName="busybox-fc5497c4f-mknlp"
	Jul 31 18:38:17 ha-326651 kubelet[1381]: I0731 18:38:17.167757    1381 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls89z\" (UniqueName: \"kubernetes.io/projected/15a3f7d9-8405-4304-87da-8962e2d81f4e-kube-api-access-ls89z\") pod \"busybox-fc5497c4f-mknlp\" (UID: \"15a3f7d9-8405-4304-87da-8962e2d81f4e\") " pod="default/busybox-fc5497c4f-mknlp"
	Jul 31 18:38:20 ha-326651 kubelet[1381]: E0731 18:38:20.383989    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:38:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:38:22 ha-326651 kubelet[1381]: E0731 18:38:22.018940    1381 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34184->127.0.0.1:32875: write tcp 127.0.0.1:34184->127.0.0.1:32875: write: broken pipe
	Jul 31 18:39:20 ha-326651 kubelet[1381]: E0731 18:39:20.358535    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:39:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:40:20 ha-326651 kubelet[1381]: E0731 18:40:20.360365    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:40:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:41:20 ha-326651 kubelet[1381]: E0731 18:41:20.356206    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:41:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326651 -n ha-326651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-326651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (3.19606449s)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:42:03.908305  418761 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:03.908485  418761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:03.908496  418761 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:03.908503  418761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:03.908685  418761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:03.908873  418761 out.go:298] Setting JSON to false
	I0731 18:42:03.908912  418761 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:03.909007  418761 notify.go:220] Checking for updates...
	I0731 18:42:03.909355  418761 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:03.909374  418761 status.go:255] checking status of ha-326651 ...
	I0731 18:42:03.909798  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:03.909874  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:03.925368  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0731 18:42:03.925769  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:03.926275  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:03.926298  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:03.926673  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:03.926922  418761 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:03.928465  418761 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:03.928489  418761 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:03.928803  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:03.928846  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:03.945409  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39997
	I0731 18:42:03.945875  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:03.946354  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:03.946378  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:03.946716  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:03.946992  418761 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:03.950253  418761 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:03.950708  418761 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:03.950774  418761 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:03.950866  418761 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:03.951251  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:03.951297  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:03.967743  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I0731 18:42:03.968218  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:03.968737  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:03.968763  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:03.969130  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:03.969403  418761 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:03.969620  418761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:03.969650  418761 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:03.973084  418761 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:03.973504  418761 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:03.973535  418761 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:03.973671  418761 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:03.973920  418761 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:03.974097  418761 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:03.974271  418761 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:04.057390  418761 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:04.064168  418761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:04.080492  418761 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:04.080524  418761 api_server.go:166] Checking apiserver status ...
	I0731 18:42:04.080570  418761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:04.096095  418761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:04.107008  418761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:04.107082  418761 ssh_runner.go:195] Run: ls
	I0731 18:42:04.112051  418761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:04.116688  418761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:04.116717  418761 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:04.116733  418761 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:04.116763  418761 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:04.117107  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:04.117153  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:04.132690  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44427
	I0731 18:42:04.133148  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:04.133626  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:04.133646  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:04.134012  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:04.134205  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:04.135917  418761 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:42:04.135936  418761 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:04.136263  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:04.136313  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:04.152496  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36609
	I0731 18:42:04.152989  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:04.153474  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:04.153504  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:04.153837  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:04.154071  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:42:04.157470  418761 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:04.157944  418761 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:04.157982  418761 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:04.158122  418761 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:04.158438  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:04.158478  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:04.174720  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0731 18:42:04.175181  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:04.175686  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:04.175702  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:04.176132  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:04.176330  418761 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:42:04.176570  418761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:04.176594  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:42:04.179308  418761 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:04.179735  418761 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:04.179763  418761 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:04.179981  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:42:04.180190  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:42:04.180347  418761 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:42:04.180514  418761 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:42:06.692744  418761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:06.692877  418761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:42:06.692923  418761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:06.692935  418761 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:42:06.692956  418761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:06.692966  418761 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:06.693388  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.693461  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.709436  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0731 18:42:06.709908  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.710466  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.710495  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.710858  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.711137  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:06.712949  418761 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:06.712972  418761 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:06.713265  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.713322  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.729357  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0731 18:42:06.729837  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.730296  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.730321  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.730677  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.730852  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:06.733910  418761 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:06.734362  418761 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:06.734386  418761 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:06.734641  418761 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:06.734994  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.735040  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.751140  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I0731 18:42:06.751735  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.752347  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.752397  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.752808  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.753032  418761 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:06.753255  418761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:06.753278  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:06.756669  418761 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:06.757192  418761 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:06.757217  418761 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:06.757370  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:06.757580  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:06.757751  418761 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:06.757938  418761 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:06.836443  418761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:06.852432  418761 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:06.852464  418761 api_server.go:166] Checking apiserver status ...
	I0731 18:42:06.852511  418761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:06.866826  418761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:06.876303  418761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:06.876387  418761 ssh_runner.go:195] Run: ls
	I0731 18:42:06.880599  418761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:06.888745  418761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:06.888778  418761 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:06.888791  418761 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:06.888811  418761 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:06.889212  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.889262  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.904541  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I0731 18:42:06.905050  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.905556  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.905619  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.906085  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.906295  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:06.908129  418761 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:06.908168  418761 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:06.908617  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.908668  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.924873  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I0731 18:42:06.925340  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.925887  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.925914  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.926252  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.926434  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:06.929450  418761 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:06.929912  418761 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:06.929956  418761 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:06.930151  418761 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:06.930434  418761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:06.930469  418761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:06.945864  418761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45583
	I0731 18:42:06.946352  418761 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:06.946890  418761 main.go:141] libmachine: Using API Version  1
	I0731 18:42:06.946913  418761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:06.947255  418761 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:06.947532  418761 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:06.947771  418761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:06.947799  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:06.950837  418761 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:06.951379  418761 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:06.951407  418761 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:06.951590  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:06.951761  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:06.951942  418761 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:06.952093  418761 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:07.040629  418761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:07.056342  418761 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
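The exit-status-3 run recorded above walks the same sequence for every node: open an SSH session, read /var usage with df, check whether kubelet is active, and, for control-plane nodes, verify the apiserver. ha-326651-m02 fails at the very first step (dial tcp 192.168.39.202:22: connect: no route to host), which is why it is reported as host: Error with kubelet and apiserver Nonexistent. The following is a minimal, stand-alone Go sketch of that per-node probe, assuming the node IP, docker user, and jenkins key path copied from the log; it is an illustration only, not minikube's implementation.

	// probe_node.go - sketch of the per-node checks visible in the log above.
	package main

	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)

	func main() {
		ip := "192.168.39.202" // ha-326651-m02, the node reporting Host:Error
		key := "/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa"

		// Step 1: the SSH dial. "connect: no route to host" here is what flips the
		// node to Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent.
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
		if err != nil {
			fmt.Printf("ssh dial failed: %v\n", err)
			return
		}
		conn.Close()

		// Step 2: disk usage of /var, the same pipeline the log shows via ssh_runner.
		out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
			"docker@"+ip, `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			fmt.Printf("df over ssh failed: %v\n", err)
			return
		}
		fmt.Printf("/var usage: %s", out)

		// Step 3: kubelet state; exit code 0 means the unit is active.
		err = exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
			"docker@"+ip, "sudo systemctl is-active --quiet service kubelet").Run()
		fmt.Printf("kubelet active: %v\n", err == nil)
	}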
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (5.076005555s)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:42:08.174773  418862 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:08.175250  418862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:08.175304  418862 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:08.175323  418862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:08.175768  418862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:08.176244  418862 out.go:298] Setting JSON to false
	I0731 18:42:08.176282  418862 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:08.176352  418862 notify.go:220] Checking for updates...
	I0731 18:42:08.176801  418862 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:08.176823  418862 status.go:255] checking status of ha-326651 ...
	I0731 18:42:08.177269  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.177333  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.193127  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41453
	I0731 18:42:08.193743  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.194438  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.194476  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.194896  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.195105  418862 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:08.196692  418862 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:08.196720  418862 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:08.197050  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.197096  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.212280  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0731 18:42:08.212752  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.213272  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.213296  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.213606  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.213815  418862 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:08.216804  418862 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:08.217239  418862 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:08.217267  418862 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:08.217471  418862 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:08.217743  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.217783  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.233058  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I0731 18:42:08.233513  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.234045  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.234085  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.234406  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.234618  418862 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:08.234828  418862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:08.234854  418862 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:08.237839  418862 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:08.238564  418862 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:08.238604  418862 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:08.238717  418862 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:08.238938  418862 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:08.239092  418862 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:08.239233  418862 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:08.320896  418862 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:08.327880  418862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:08.348186  418862 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:08.348217  418862 api_server.go:166] Checking apiserver status ...
	I0731 18:42:08.348252  418862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:08.375916  418862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:08.388233  418862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:08.388336  418862 ssh_runner.go:195] Run: ls
	I0731 18:42:08.393247  418862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:08.397563  418862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:08.397591  418862 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:08.397604  418862 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:08.397622  418862 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:08.397924  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.397983  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.414584  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
	I0731 18:42:08.415113  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.415703  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.415728  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.416129  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.416358  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:08.418133  418862 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:42:08.418166  418862 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:08.418477  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.418514  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.434662  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43889
	I0731 18:42:08.435072  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.435575  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.435598  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.435884  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.436117  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:42:08.438942  418862 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:08.439377  418862 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:08.439411  418862 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:08.439479  418862 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:08.439836  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:08.439877  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:08.455503  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0731 18:42:08.455999  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:08.456580  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:08.456606  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:08.456986  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:08.457212  418862 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:42:08.457398  418862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:08.457414  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:42:08.460310  418862 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:08.460791  418862 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:08.460815  418862 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:08.460989  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:42:08.461180  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:42:08.461296  418862 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:42:08.461419  418862 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:42:09.764775  418862 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:09.764878  418862 retry.go:31] will retry after 368.99526ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:12.836730  418862 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:12.836879  418862 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:42:12.836911  418862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:12.836923  418862 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:42:12.837063  418862 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:12.837102  418862 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:12.837446  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:12.837490  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:12.853569  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0731 18:42:12.854048  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:12.854565  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:12.854587  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:12.854998  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:12.855253  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:12.857068  418862 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:12.857089  418862 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:12.857463  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:12.857504  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:12.875352  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0731 18:42:12.875816  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:12.876369  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:12.876417  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:12.876833  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:12.877069  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:12.879714  418862 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:12.880183  418862 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:12.880205  418862 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:12.880390  418862 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:12.880689  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:12.880724  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:12.896054  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0731 18:42:12.896571  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:12.897095  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:12.897123  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:12.897429  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:12.897623  418862 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:12.897829  418862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:12.897856  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:12.900718  418862 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:12.901188  418862 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:12.901235  418862 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:12.901387  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:12.901569  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:12.901724  418862 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:12.901868  418862 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:12.984093  418862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:12.999973  418862 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:13.000016  418862 api_server.go:166] Checking apiserver status ...
	I0731 18:42:13.000084  418862 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:13.015533  418862 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:13.025999  418862 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:13.026062  418862 ssh_runner.go:195] Run: ls
	I0731 18:42:13.031049  418862 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:13.039434  418862 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:13.039465  418862 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:13.039474  418862 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:13.039504  418862 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:13.039841  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:13.039886  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:13.056688  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37707
	I0731 18:42:13.057098  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:13.057639  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:13.057672  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:13.058051  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:13.058291  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:13.059870  418862 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:13.059888  418862 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:13.060190  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:13.060226  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:13.075681  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45855
	I0731 18:42:13.076141  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:13.076694  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:13.076716  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:13.077061  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:13.077316  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:13.080310  418862 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:13.080839  418862 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:13.080866  418862 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:13.081099  418862 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:13.081428  418862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:13.081465  418862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:13.098493  418862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0731 18:42:13.098932  418862 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:13.099414  418862 main.go:141] libmachine: Using API Version  1
	I0731 18:42:13.099445  418862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:13.099815  418862 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:13.100035  418862 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:13.100248  418862 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:13.100269  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:13.103266  418862 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:13.103756  418862 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:13.103780  418862 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:13.103902  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:13.104105  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:13.104270  418862 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:13.104462  418862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:13.192216  418862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:13.207304  418862 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
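The sshutil and retry.go lines in this second run ("dial failure (will retry)", "will retry after 368.99526ms") show that the connection to m02 is retried with a short delay before the status check gives up and records the "failed to get storage capacity of /var" error. A minimal Go sketch of that dial-and-retry pattern is shown below; the address, attempt count, and backoff values are assumptions for illustration, not the values minikube uses.

	// retry_dial.go - sketch of the dial-and-retry pattern seen in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection, backing off between tries,
	// and gives up after maxAttempts, mirroring how the status check eventually
	// marks the unreachable node as Host:Error.
	func dialWithRetry(addr string, maxAttempts int, backoff time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < maxAttempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2 // simple exponential backoff; the real interval in the log is jittered
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
	}

	func main() {
		if _, err := dialWithRetry("192.168.39.202:22", 3, 300*time.Millisecond); err != nil {
			fmt.Println(err)
		}
	}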
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (4.835763702s)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:42:14.773384  418962 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:14.773531  418962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:14.773542  418962 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:14.773546  418962 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:14.773744  418962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:14.773938  418962 out.go:298] Setting JSON to false
	I0731 18:42:14.773968  418962 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:14.774037  418962 notify.go:220] Checking for updates...
	I0731 18:42:14.774462  418962 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:14.774484  418962 status.go:255] checking status of ha-326651 ...
	I0731 18:42:14.774952  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:14.775026  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:14.790582  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0731 18:42:14.791073  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:14.791665  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:14.791687  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:14.792120  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:14.792330  418962 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:14.794071  418962 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:14.794098  418962 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:14.794485  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:14.794527  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:14.809713  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0731 18:42:14.810151  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:14.810627  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:14.810643  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:14.811041  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:14.811242  418962 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:14.814230  418962 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:14.814648  418962 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:14.814684  418962 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:14.814803  418962 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:14.815200  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:14.815252  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:14.830960  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I0731 18:42:14.831417  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:14.831954  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:14.831986  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:14.832426  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:14.832628  418962 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:14.832875  418962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:14.832911  418962 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:14.835894  418962 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:14.836334  418962 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:14.836358  418962 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:14.836523  418962 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:14.836685  418962 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:14.836876  418962 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:14.837005  418962 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:14.920159  418962 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:14.926545  418962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:14.942943  418962 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:14.942977  418962 api_server.go:166] Checking apiserver status ...
	I0731 18:42:14.943022  418962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:14.958579  418962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:14.969123  418962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:14.969203  418962 ssh_runner.go:195] Run: ls
	I0731 18:42:14.974200  418962 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:14.978533  418962 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:14.978562  418962 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:14.978573  418962 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:14.978598  418962 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:14.978997  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:14.979045  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:14.994743  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37403
	I0731 18:42:14.995178  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:14.995689  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:14.995711  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:14.996032  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:14.996246  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:14.997916  418962 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:42:14.997932  418962 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:14.998326  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:14.998403  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:15.014576  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41415
	I0731 18:42:15.014978  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:15.015484  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:15.015508  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:15.015901  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:15.016124  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:42:15.019141  418962 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:15.019620  418962 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:15.019646  418962 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:15.019828  418962 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:15.020170  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:15.020214  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:15.036044  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45977
	I0731 18:42:15.036553  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:15.037158  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:15.037176  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:15.037534  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:15.037753  418962 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:42:15.037994  418962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:15.038020  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:42:15.040941  418962 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:15.041623  418962 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:15.041654  418962 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:15.041813  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:42:15.042014  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:42:15.042184  418962 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:42:15.042329  418962 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:42:15.908670  418962 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:15.908734  418962 retry.go:31] will retry after 220.882371ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:19.204703  418962 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:19.204803  418962 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:42:19.204827  418962 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:19.204840  418962 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:42:19.204873  418962 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:19.204882  418962 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:19.205291  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.205349  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.221266  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I0731 18:42:19.221832  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.222358  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.222386  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.222710  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.222936  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:19.224580  418962 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:19.224605  418962 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:19.224938  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.224975  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.240697  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0731 18:42:19.241191  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.241708  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.241731  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.242044  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.242253  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:19.245107  418962 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:19.245612  418962 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:19.245639  418962 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:19.245777  418962 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:19.246088  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.246123  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.262376  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33831
	I0731 18:42:19.262844  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.263354  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.263380  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.263721  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.263901  418962 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:19.264123  418962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:19.264154  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:19.267124  418962 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:19.267612  418962 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:19.267642  418962 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:19.267829  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:19.268005  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:19.268137  418962 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:19.268272  418962 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:19.348433  418962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:19.363557  418962 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:19.363590  418962 api_server.go:166] Checking apiserver status ...
	I0731 18:42:19.363622  418962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:19.377640  418962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:19.388829  418962 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:19.388897  418962 ssh_runner.go:195] Run: ls
	I0731 18:42:19.393719  418962 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:19.399872  418962 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:19.399913  418962 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:19.399924  418962 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:19.399945  418962 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:19.400254  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.400291  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.416202  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I0731 18:42:19.416581  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.417030  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.417057  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.417390  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.417543  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:19.419333  418962 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:19.419351  418962 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:19.419663  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.419705  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.435318  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0731 18:42:19.435780  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.436268  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.436286  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.436628  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.436899  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:19.439895  418962 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:19.440361  418962 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:19.440441  418962 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:19.440583  418962 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:19.440928  418962 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:19.440973  418962 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:19.457172  418962 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45353
	I0731 18:42:19.457641  418962 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:19.458121  418962 main.go:141] libmachine: Using API Version  1
	I0731 18:42:19.458149  418962 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:19.458514  418962 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:19.458800  418962 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:19.459013  418962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:19.459039  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:19.462219  418962 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:19.462855  418962 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:19.462895  418962 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:19.463149  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:19.463313  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:19.463461  418962 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:19.463585  418962 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:19.547990  418962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:19.563540  418962 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (4.826887739s)

-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 18:42:21.055718  419078 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:21.055864  419078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:21.055875  419078 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:21.055881  419078 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:21.056164  419078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:21.056357  419078 out.go:298] Setting JSON to false
	I0731 18:42:21.056420  419078 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:21.056521  419078 notify.go:220] Checking for updates...
	I0731 18:42:21.056943  419078 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:21.056968  419078 status.go:255] checking status of ha-326651 ...
	I0731 18:42:21.057459  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.057513  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.073266  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41307
	I0731 18:42:21.073774  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.074397  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.074418  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.074753  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.074973  419078 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:21.076856  419078 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:21.076876  419078 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:21.077207  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.077256  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.093789  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I0731 18:42:21.094313  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.094794  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.094818  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.095206  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.095451  419078 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:21.098162  419078 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:21.098565  419078 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:21.098588  419078 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:21.098728  419078 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:21.099134  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.099206  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.114062  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
	I0731 18:42:21.114486  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.114958  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.114982  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.115318  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.115506  419078 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:21.115682  419078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:21.115715  419078 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:21.118388  419078 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:21.118903  419078 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:21.118926  419078 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:21.119021  419078 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:21.119224  419078 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:21.119373  419078 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:21.119477  419078 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:21.200479  419078 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:21.206997  419078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:21.222849  419078 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:21.222885  419078 api_server.go:166] Checking apiserver status ...
	I0731 18:42:21.222930  419078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:21.237333  419078 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:21.247980  419078 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:21.248034  419078 ssh_runner.go:195] Run: ls
	I0731 18:42:21.254283  419078 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:21.260663  419078 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:21.260693  419078 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:21.260706  419078 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:21.260727  419078 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:21.261147  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.261200  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.277182  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0731 18:42:21.277634  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.278215  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.278237  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.278610  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.278796  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:21.280432  419078 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:42:21.280449  419078 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:21.280724  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.280755  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.296257  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0731 18:42:21.296725  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.297272  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.297295  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.297638  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.297834  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:42:21.300911  419078 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:21.301366  419078 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:21.301395  419078 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:21.301513  419078 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:21.301829  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:21.301872  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:21.317454  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0731 18:42:21.317887  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:21.318389  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:21.318415  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:21.318740  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:21.318990  419078 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:42:21.319221  419078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:21.319245  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:42:21.322150  419078 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:21.322533  419078 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:21.322559  419078 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:21.322699  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:42:21.322849  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:42:21.322991  419078 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:42:21.323123  419078 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:42:22.276616  419078 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:22.276670  419078 retry.go:31] will retry after 139.847567ms: dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:25.480632  419078 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:25.480748  419078 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:42:25.480773  419078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:25.480785  419078 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:42:25.480814  419078 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:25.480827  419078 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:25.481159  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.481214  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.496590  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39161
	I0731 18:42:25.497074  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.497542  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.497573  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.497878  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.498058  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:25.499675  419078 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:25.499694  419078 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:25.499992  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.500056  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.514781  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0731 18:42:25.515226  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.515700  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.515730  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.516069  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.516274  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:25.518905  419078 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:25.519340  419078 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:25.519368  419078 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:25.519546  419078 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:25.519892  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.519969  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.534838  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44731
	I0731 18:42:25.535309  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.535819  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.535840  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.536154  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.536403  419078 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:25.536608  419078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:25.536635  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:25.539351  419078 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:25.539798  419078 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:25.539828  419078 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:25.539970  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:25.540129  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:25.540289  419078 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:25.540412  419078 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:25.620277  419078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:25.637440  419078 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:25.637473  419078 api_server.go:166] Checking apiserver status ...
	I0731 18:42:25.637522  419078 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:25.652255  419078 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:25.663315  419078 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:25.663381  419078 ssh_runner.go:195] Run: ls
	I0731 18:42:25.667620  419078 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:25.673570  419078 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:25.673599  419078 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:25.673610  419078 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:25.673629  419078 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:25.673934  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.673969  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.689067  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45267
	I0731 18:42:25.689472  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.689936  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.689956  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.690359  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.690521  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:25.692165  419078 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:25.692184  419078 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:25.692546  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.692594  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.707840  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I0731 18:42:25.708279  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.708776  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.708797  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.709119  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.709325  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:25.712132  419078 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:25.712571  419078 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:25.712617  419078 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:25.712720  419078 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:25.713050  419078 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:25.713088  419078 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:25.727809  419078 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40039
	I0731 18:42:25.728220  419078 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:25.728779  419078 main.go:141] libmachine: Using API Version  1
	I0731 18:42:25.728806  419078 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:25.729147  419078 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:25.729337  419078 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:25.729532  419078 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:25.729557  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:25.732321  419078 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:25.732738  419078 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:25.732766  419078 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:25.732903  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:25.733070  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:25.733241  419078 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:25.733381  419078 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:25.820135  419078 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:25.835245  419078 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (3.73604338s)

-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 18:42:30.497857  419178 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:30.497971  419178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:30.497983  419178 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:30.497988  419178 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:30.498243  419178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:30.498472  419178 out.go:298] Setting JSON to false
	I0731 18:42:30.498503  419178 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:30.498609  419178 notify.go:220] Checking for updates...
	I0731 18:42:30.498927  419178 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:30.498948  419178 status.go:255] checking status of ha-326651 ...
	I0731 18:42:30.499493  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.499563  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.516215  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0731 18:42:30.516715  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.517380  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.517421  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.517895  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.518165  419178 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:30.519856  419178 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:30.519882  419178 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:30.520311  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.520390  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.536655  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45665
	I0731 18:42:30.537103  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.537569  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.537603  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.537932  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.538182  419178 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:30.541138  419178 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:30.541642  419178 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:30.541672  419178 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:30.541794  419178 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:30.542118  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.542160  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.557188  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33977
	I0731 18:42:30.557594  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.558263  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.558286  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.558592  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.558782  419178 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:30.559045  419178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:30.559078  419178 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:30.561692  419178 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:30.562145  419178 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:30.562187  419178 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:30.562357  419178 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:30.562539  419178 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:30.562700  419178 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:30.563004  419178 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:30.646167  419178 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:30.652873  419178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:30.667991  419178 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:30.668020  419178 api_server.go:166] Checking apiserver status ...
	I0731 18:42:30.668069  419178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:30.683583  419178 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:30.694030  419178 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:30.694097  419178 ssh_runner.go:195] Run: ls
	I0731 18:42:30.699532  419178 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:30.704127  419178 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:30.704149  419178 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:30.704159  419178 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:30.704175  419178 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:30.704495  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.704530  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.720072  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0731 18:42:30.720509  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.721033  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.721053  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.721367  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.721574  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:30.722961  419178 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:42:30.722978  419178 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:30.723327  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.723392  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.738099  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I0731 18:42:30.738549  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.739094  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.739114  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.739434  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.739634  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:42:30.742739  419178 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:30.743198  419178 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:30.743220  419178 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:30.743410  419178 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:42:30.743756  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:30.743798  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:30.758903  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
	I0731 18:42:30.759284  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:30.759731  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:30.759765  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:30.760059  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:30.760239  419178 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:42:30.760443  419178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:30.760475  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:42:30.762929  419178 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:30.763330  419178 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:42:30.763356  419178 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:42:30.763510  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:42:30.763725  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:42:30.763931  419178 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:42:30.764099  419178 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	W0731 18:42:33.832657  419178 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.202:22: connect: no route to host
	W0731 18:42:33.832750  419178 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	E0731 18:42:33.832767  419178 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:33.832795  419178 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:42:33.832817  419178 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.202:22: connect: no route to host
	I0731 18:42:33.832830  419178 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:33.833143  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:33.833192  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:33.848991  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44439
	I0731 18:42:33.849459  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:33.850101  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:33.850128  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:33.850433  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:33.850648  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:33.852016  419178 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:33.852045  419178 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:33.852348  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:33.852403  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:33.867755  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46839
	I0731 18:42:33.868122  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:33.868607  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:33.868632  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:33.868951  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:33.869145  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:33.871894  419178 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:33.872339  419178 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:33.872361  419178 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:33.872509  419178 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:33.872812  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:33.872852  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:33.887762  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0731 18:42:33.888211  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:33.888698  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:33.888719  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:33.889067  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:33.889284  419178 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:33.889510  419178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:33.889544  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:33.892054  419178 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:33.892548  419178 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:33.892572  419178 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:33.892749  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:33.892948  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:33.893092  419178 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:33.893244  419178 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:33.972919  419178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:33.987282  419178 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:33.987318  419178 api_server.go:166] Checking apiserver status ...
	I0731 18:42:33.987362  419178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:34.004683  419178 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:34.014638  419178 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:34.014709  419178 ssh_runner.go:195] Run: ls
	I0731 18:42:34.019244  419178 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:34.024117  419178 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:34.024145  419178 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:34.024154  419178 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:34.024171  419178 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:34.024515  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:34.024554  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:34.041227  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42915
	I0731 18:42:34.041708  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:34.042228  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:34.042258  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:34.042653  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:34.042882  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:34.044451  419178 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:34.044468  419178 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:34.044766  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:34.044799  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:34.060539  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45573
	I0731 18:42:34.060934  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:34.061444  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:34.061466  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:34.061790  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:34.062001  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:34.064639  419178 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:34.065180  419178 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:34.065220  419178 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:34.065328  419178 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:34.065653  419178 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:34.065700  419178 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:34.082248  419178 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44997
	I0731 18:42:34.082661  419178 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:34.083145  419178 main.go:141] libmachine: Using API Version  1
	I0731 18:42:34.083169  419178 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:34.083480  419178 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:34.083715  419178 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:34.083889  419178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:34.083915  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:34.087005  419178 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:34.087493  419178 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:34.087519  419178 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:34.087739  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:34.087930  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:34.088096  419178 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:34.088263  419178 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:34.172266  419178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:34.186805  419178 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 7 (650.877894ms)

-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 18:42:40.558518  419316 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:40.558673  419316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:40.558686  419316 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:40.558693  419316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:40.558913  419316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:40.559094  419316 out.go:298] Setting JSON to false
	I0731 18:42:40.559120  419316 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:40.559229  419316 notify.go:220] Checking for updates...
	I0731 18:42:40.559523  419316 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:40.559539  419316 status.go:255] checking status of ha-326651 ...
	I0731 18:42:40.559976  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.560038  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.575551  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
	I0731 18:42:40.576068  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.576589  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.576608  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.576938  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.577143  419316 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:40.579162  419316 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:40.579194  419316 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:40.579607  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.579654  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.594899  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41427
	I0731 18:42:40.595319  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.595818  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.595839  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.596145  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.596339  419316 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:40.599380  419316 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:40.599797  419316 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:40.599835  419316 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:40.600018  419316 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:40.600461  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.600539  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.615728  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35529
	I0731 18:42:40.616147  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.616702  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.616731  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.617145  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.617360  419316 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:40.617538  419316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:40.617561  419316 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:40.620773  419316 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:40.621162  419316 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:40.621192  419316 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:40.621304  419316 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:40.621494  419316 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:40.621694  419316 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:40.621876  419316 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:40.705483  419316 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:40.713763  419316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:40.730824  419316 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:40.730855  419316 api_server.go:166] Checking apiserver status ...
	I0731 18:42:40.730898  419316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:40.754246  419316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:40.768731  419316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:40.768812  419316 ssh_runner.go:195] Run: ls
	I0731 18:42:40.773755  419316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:40.780032  419316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:40.780062  419316 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:40.780084  419316 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:40.780108  419316 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:40.780441  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.780487  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.795696  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I0731 18:42:40.796146  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.796686  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.796707  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.797022  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.797238  419316 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:40.798756  419316 status.go:330] ha-326651-m02 host status = "Stopped" (err=<nil>)
	I0731 18:42:40.798768  419316 status.go:343] host is not running, skipping remaining checks
	I0731 18:42:40.798790  419316 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:40.798815  419316 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:40.799148  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.799192  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.814960  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42665
	I0731 18:42:40.815511  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.816023  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.816049  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.816415  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.816631  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:40.818427  419316 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:40.818450  419316 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:40.818824  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.818871  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.835426  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I0731 18:42:40.835849  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.836342  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.836368  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.836771  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.837007  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:40.839977  419316 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:40.840416  419316 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:40.840448  419316 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:40.840604  419316 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:40.840933  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:40.840997  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:40.856317  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33691
	I0731 18:42:40.856911  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:40.857568  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:40.857593  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:40.858002  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:40.858226  419316 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:40.858433  419316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:40.858456  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:40.861135  419316 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:40.861606  419316 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:40.861631  419316 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:40.861752  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:40.861985  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:40.862167  419316 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:40.862313  419316 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:40.948567  419316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:40.965921  419316 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:40.965953  419316 api_server.go:166] Checking apiserver status ...
	I0731 18:42:40.966023  419316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:40.982291  419316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:40.992760  419316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:40.992822  419316 ssh_runner.go:195] Run: ls
	I0731 18:42:40.997532  419316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:41.001889  419316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:41.001912  419316 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:41.001920  419316 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:41.001936  419316 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:41.002235  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:41.002271  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:41.017508  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0731 18:42:41.017918  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:41.018494  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:41.018515  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:41.018855  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:41.019061  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:41.020675  419316 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:41.020694  419316 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:41.021014  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:41.021061  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:41.036221  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41363
	I0731 18:42:41.036678  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:41.037150  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:41.037171  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:41.037488  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:41.037715  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:41.040618  419316 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:41.040983  419316 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:41.041009  419316 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:41.041106  419316 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:41.041473  419316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:41.041514  419316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:41.056713  419316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0731 18:42:41.057235  419316 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:41.057871  419316 main.go:141] libmachine: Using API Version  1
	I0731 18:42:41.057892  419316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:41.058232  419316 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:41.058419  419316 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:41.058612  419316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:41.058638  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:41.061396  419316 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:41.061781  419316 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:41.061798  419316 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:41.061937  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:41.062113  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:41.062268  419316 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:41.062450  419316 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:41.147946  419316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:41.163144  419316 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 7 (642.431592ms)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:42:51.281151  419442 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:42:51.281281  419442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:51.281291  419442 out.go:304] Setting ErrFile to fd 2...
	I0731 18:42:51.281297  419442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:42:51.281565  419442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:42:51.281785  419442 out.go:298] Setting JSON to false
	I0731 18:42:51.281823  419442 mustload.go:65] Loading cluster: ha-326651
	I0731 18:42:51.281861  419442 notify.go:220] Checking for updates...
	I0731 18:42:51.282331  419442 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:42:51.282365  419442 status.go:255] checking status of ha-326651 ...
	I0731 18:42:51.282945  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.283004  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.298844  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I0731 18:42:51.299420  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.300054  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.300077  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.300545  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.300783  419442 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:42:51.302697  419442 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:42:51.302724  419442 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:51.303141  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.303183  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.318654  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38157
	I0731 18:42:51.319189  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.319700  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.319725  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.320087  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.320265  419442 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:42:51.323267  419442 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:51.323738  419442 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:51.323778  419442 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:51.323944  419442 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:42:51.324260  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.324316  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.339649  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I0731 18:42:51.340160  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.340735  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.340777  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.341076  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.341329  419442 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:42:51.341526  419442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:51.341560  419442 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:42:51.344093  419442 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:51.344595  419442 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:42:51.344628  419442 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:42:51.344770  419442 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:42:51.344961  419442 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:42:51.345164  419442 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:42:51.345414  419442 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:42:51.433143  419442 ssh_runner.go:195] Run: systemctl --version
	I0731 18:42:51.439661  419442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:51.458055  419442 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:51.458089  419442 api_server.go:166] Checking apiserver status ...
	I0731 18:42:51.458145  419442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:51.475427  419442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:42:51.492624  419442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:51.492680  419442 ssh_runner.go:195] Run: ls
	I0731 18:42:51.498183  419442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:51.503171  419442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:51.503200  419442 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:42:51.503214  419442 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:51.503240  419442 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:42:51.503603  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.503637  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.518443  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0731 18:42:51.518802  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.519261  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.519286  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.519611  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.519814  419442 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:42:51.521236  419442 status.go:330] ha-326651-m02 host status = "Stopped" (err=<nil>)
	I0731 18:42:51.521249  419442 status.go:343] host is not running, skipping remaining checks
	I0731 18:42:51.521257  419442 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:51.521285  419442 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:42:51.521582  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.521628  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.536350  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0731 18:42:51.536766  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.537247  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.537266  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.537565  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.537767  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:42:51.539626  419442 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:42:51.539644  419442 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:51.539940  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.539977  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.555446  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33571
	I0731 18:42:51.555862  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.556369  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.556407  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.556719  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.556920  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:42:51.559761  419442 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:51.560202  419442 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:51.560231  419442 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:51.560357  419442 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:42:51.560732  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.560771  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.575547  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39151
	I0731 18:42:51.576015  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.576531  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.576552  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.576876  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.577042  419442 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:42:51.577224  419442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:51.577241  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:42:51.579919  419442 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:51.580336  419442 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:42:51.580363  419442 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:42:51.580529  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:42:51.580711  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:42:51.580842  419442 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:42:51.580963  419442 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:42:51.660783  419442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:51.675518  419442 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:42:51.675552  419442 api_server.go:166] Checking apiserver status ...
	I0731 18:42:51.675595  419442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:42:51.690971  419442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:42:51.700917  419442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:42:51.700971  419442 ssh_runner.go:195] Run: ls
	I0731 18:42:51.705510  419442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:42:51.710378  419442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:42:51.710409  419442 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:42:51.710437  419442 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:42:51.710459  419442 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:42:51.710892  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.710947  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.727232  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33083
	I0731 18:42:51.727664  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.728140  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.728163  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.728536  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.728764  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:42:51.730246  419442 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:42:51.730274  419442 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:51.730554  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.730583  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.745231  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37521
	I0731 18:42:51.745636  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.746248  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.746271  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.746604  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.746833  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:42:51.749454  419442 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:51.749870  419442 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:51.749897  419442 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:51.750025  419442 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:42:51.750370  419442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:42:51.750437  419442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:42:51.767382  419442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
	I0731 18:42:51.767870  419442 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:42:51.768472  419442 main.go:141] libmachine: Using API Version  1
	I0731 18:42:51.768510  419442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:42:51.768852  419442 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:42:51.769076  419442 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:42:51.769249  419442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:42:51.769266  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:42:51.772118  419442 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:51.772588  419442 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:42:51.772614  419442 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:42:51.772778  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:42:51.772961  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:42:51.773137  419442 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:42:51.773291  419442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:42:51.856341  419442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:42:51.874697  419442 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 7 (618.461679ms)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-326651-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:43:02.275215  419546 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:43:02.275340  419546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:43:02.275349  419546 out.go:304] Setting ErrFile to fd 2...
	I0731 18:43:02.275356  419546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:43:02.275555  419546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:43:02.275754  419546 out.go:298] Setting JSON to false
	I0731 18:43:02.275787  419546 mustload.go:65] Loading cluster: ha-326651
	I0731 18:43:02.275905  419546 notify.go:220] Checking for updates...
	I0731 18:43:02.276304  419546 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:43:02.276325  419546 status.go:255] checking status of ha-326651 ...
	I0731 18:43:02.276862  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.276932  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.292555  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37887
	I0731 18:43:02.292971  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.293681  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.293727  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.294067  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.294252  419546 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:43:02.295809  419546 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:43:02.295827  419546 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:43:02.296126  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.296190  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.311046  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38595
	I0731 18:43:02.311461  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.311963  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.311988  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.312334  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.312547  419546 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:43:02.315470  419546 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:43:02.315917  419546 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:43:02.315949  419546 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:43:02.316084  419546 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:43:02.316408  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.316445  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.332340  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I0731 18:43:02.332785  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.333319  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.333343  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.333697  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.333903  419546 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:43:02.334117  419546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:43:02.334154  419546 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:43:02.336967  419546 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:43:02.337455  419546 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:43:02.337476  419546 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:43:02.337647  419546 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:43:02.337837  419546 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:43:02.338042  419546 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:43:02.338219  419546 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:43:02.424320  419546 ssh_runner.go:195] Run: systemctl --version
	I0731 18:43:02.430731  419546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:43:02.444576  419546 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:43:02.444607  419546 api_server.go:166] Checking apiserver status ...
	I0731 18:43:02.444653  419546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:43:02.457977  419546 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup
	W0731 18:43:02.467326  419546 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1221/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:43:02.467386  419546 ssh_runner.go:195] Run: ls
	I0731 18:43:02.471780  419546 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:43:02.478226  419546 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:43:02.478252  419546 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:43:02.478265  419546 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:43:02.478299  419546 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:43:02.478609  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.478657  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.494546  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33527
	I0731 18:43:02.495010  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.495547  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.495571  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.495921  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.496111  419546 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:43:02.497626  419546 status.go:330] ha-326651-m02 host status = "Stopped" (err=<nil>)
	I0731 18:43:02.497638  419546 status.go:343] host is not running, skipping remaining checks
	I0731 18:43:02.497644  419546 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:43:02.497659  419546 status.go:255] checking status of ha-326651-m03 ...
	I0731 18:43:02.498031  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.498083  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.513273  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0731 18:43:02.513700  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.514205  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.514228  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.514570  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.514776  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:43:02.516288  419546 status.go:330] ha-326651-m03 host status = "Running" (err=<nil>)
	I0731 18:43:02.516305  419546 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:43:02.516700  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.516738  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.531921  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33593
	I0731 18:43:02.532428  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.532953  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.532980  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.533282  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.533452  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:43:02.536241  419546 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:02.536689  419546 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:43:02.536712  419546 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:02.536864  419546 host.go:66] Checking if "ha-326651-m03" exists ...
	I0731 18:43:02.537192  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.537248  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.552595  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35489
	I0731 18:43:02.553057  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.553529  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.553548  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.553896  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.554125  419546 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:43:02.554337  419546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:43:02.554359  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:43:02.557976  419546 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:02.558458  419546 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:43:02.558492  419546 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:02.558734  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:43:02.558967  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:43:02.559145  419546 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:43:02.559302  419546 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:43:02.636781  419546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:43:02.651446  419546 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:43:02.651476  419546 api_server.go:166] Checking apiserver status ...
	I0731 18:43:02.651518  419546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:43:02.666110  419546 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup
	W0731 18:43:02.675941  419546 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1602/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:43:02.675997  419546 ssh_runner.go:195] Run: ls
	I0731 18:43:02.680407  419546 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:43:02.684987  419546 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:43:02.685013  419546 status.go:422] ha-326651-m03 apiserver status = Running (err=<nil>)
	I0731 18:43:02.685025  419546 status.go:257] ha-326651-m03 status: &{Name:ha-326651-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:43:02.685046  419546 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:43:02.685392  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.685435  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.700534  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0731 18:43:02.700989  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.701462  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.701484  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.701782  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.701998  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:43:02.703669  419546 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:43:02.703684  419546 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:43:02.703987  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.704059  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.718891  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0731 18:43:02.719333  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.719878  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.719900  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.720211  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.720451  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:43:02.723075  419546 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:02.723480  419546 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:43:02.723503  419546 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:02.723639  419546 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:43:02.723927  419546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:02.723959  419546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:02.738751  419546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41409
	I0731 18:43:02.739211  419546 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:02.739699  419546 main.go:141] libmachine: Using API Version  1
	I0731 18:43:02.739724  419546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:02.740026  419546 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:02.740213  419546 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:43:02.740423  419546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:43:02.740449  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:43:02.743144  419546 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:02.743701  419546 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:43:02.743751  419546 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:02.743788  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:43:02.743978  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:43:02.744144  419546 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:43:02.744289  419546 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:43:02.828918  419546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:43:02.846846  419546 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326651 -n ha-326651
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-326651 logs -n 25: (1.465578344s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m03_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m04 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp testdata/cp-test.txt                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m04_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03:/home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m03 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-326651 node stop m02 -v=7                                                     | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-326651 node start m02 -v=7                                                    | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:34:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:34:40.723848  413977 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:34:40.724353  413977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:40.724384  413977 out.go:304] Setting ErrFile to fd 2...
	I0731 18:34:40.724393  413977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:40.724879  413977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:34:40.725848  413977 out.go:298] Setting JSON to false
	I0731 18:34:40.726740  413977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8224,"bootTime":1722442657,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:34:40.726803  413977 start.go:139] virtualization: kvm guest
	I0731 18:34:40.728848  413977 out.go:177] * [ha-326651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:34:40.730458  413977 notify.go:220] Checking for updates...
	I0731 18:34:40.730468  413977 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:34:40.731857  413977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:34:40.733021  413977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:34:40.734226  413977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:40.735716  413977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:34:40.737064  413977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:34:40.738470  413977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:34:40.774904  413977 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 18:34:40.776272  413977 start.go:297] selected driver: kvm2
	I0731 18:34:40.776288  413977 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:34:40.776300  413977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:34:40.777003  413977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:34:40.777074  413977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:34:40.792816  413977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:34:40.792877  413977 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:34:40.793118  413977 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:34:40.793184  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:34:40.793195  413977 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 18:34:40.793201  413977 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 18:34:40.793264  413977 start.go:340] cluster config:
	{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:34:40.793364  413977 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:34:40.795141  413977 out.go:177] * Starting "ha-326651" primary control-plane node in "ha-326651" cluster
	I0731 18:34:40.796525  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:34:40.796567  413977 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:34:40.796577  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:34:40.796664  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:34:40.796674  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:34:40.796975  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:34:40.796993  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json: {Name:mk70ea6858e5325492e374713de5d9e959a0e0da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:34:40.797122  413977 start.go:360] acquireMachinesLock for ha-326651: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:34:40.797149  413977 start.go:364] duration metric: took 15.324µs to acquireMachinesLock for "ha-326651"
	I0731 18:34:40.797166  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:34:40.797218  413977 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 18:34:40.798819  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:34:40.798978  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:40.799025  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:40.813845  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I0731 18:34:40.814345  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:40.814896  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:34:40.814919  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:40.815368  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:40.815558  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:34:40.815739  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:34:40.815886  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:34:40.815909  413977 client.go:168] LocalClient.Create starting
	I0731 18:34:40.815942  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:34:40.815978  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:34:40.815994  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:34:40.816067  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:34:40.816086  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:34:40.816099  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:34:40.816115  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:34:40.816133  413977 main.go:141] libmachine: (ha-326651) Calling .PreCreateCheck
	I0731 18:34:40.816538  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:34:40.816974  413977 main.go:141] libmachine: Creating machine...
	I0731 18:34:40.816991  413977 main.go:141] libmachine: (ha-326651) Calling .Create
	I0731 18:34:40.817124  413977 main.go:141] libmachine: (ha-326651) Creating KVM machine...
	I0731 18:34:40.818362  413977 main.go:141] libmachine: (ha-326651) DBG | found existing default KVM network
	I0731 18:34:40.819107  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:40.818971  414000 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0731 18:34:40.819130  413977 main.go:141] libmachine: (ha-326651) DBG | created network xml: 
	I0731 18:34:40.819163  413977 main.go:141] libmachine: (ha-326651) DBG | <network>
	I0731 18:34:40.819200  413977 main.go:141] libmachine: (ha-326651) DBG |   <name>mk-ha-326651</name>
	I0731 18:34:40.819214  413977 main.go:141] libmachine: (ha-326651) DBG |   <dns enable='no'/>
	I0731 18:34:40.819224  413977 main.go:141] libmachine: (ha-326651) DBG |   
	I0731 18:34:40.819237  413977 main.go:141] libmachine: (ha-326651) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 18:34:40.819247  413977 main.go:141] libmachine: (ha-326651) DBG |     <dhcp>
	I0731 18:34:40.819257  413977 main.go:141] libmachine: (ha-326651) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 18:34:40.819270  413977 main.go:141] libmachine: (ha-326651) DBG |     </dhcp>
	I0731 18:34:40.819283  413977 main.go:141] libmachine: (ha-326651) DBG |   </ip>
	I0731 18:34:40.819293  413977 main.go:141] libmachine: (ha-326651) DBG |   
	I0731 18:34:40.819303  413977 main.go:141] libmachine: (ha-326651) DBG | </network>
	I0731 18:34:40.819312  413977 main.go:141] libmachine: (ha-326651) DBG | 
	I0731 18:34:40.824475  413977 main.go:141] libmachine: (ha-326651) DBG | trying to create private KVM network mk-ha-326651 192.168.39.0/24...
	I0731 18:34:40.890090  413977 main.go:141] libmachine: (ha-326651) DBG | private KVM network mk-ha-326651 192.168.39.0/24 created
	I0731 18:34:40.890164  413977 main.go:141] libmachine: (ha-326651) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 ...
	I0731 18:34:40.890192  413977 main.go:141] libmachine: (ha-326651) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:34:40.890228  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:40.890040  414000 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:40.890263  413977 main.go:141] libmachine: (ha-326651) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:34:41.157266  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.157110  414000 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa...
	I0731 18:34:41.217550  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.217377  414000 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/ha-326651.rawdisk...
	I0731 18:34:41.217595  413977 main.go:141] libmachine: (ha-326651) DBG | Writing magic tar header
	I0731 18:34:41.217609  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 (perms=drwx------)
	I0731 18:34:41.217624  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:34:41.217637  413977 main.go:141] libmachine: (ha-326651) DBG | Writing SSH key tar header
	I0731 18:34:41.217644  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:41.217490  414000 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651 ...
	I0731 18:34:41.217662  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651
	I0731 18:34:41.217680  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:34:41.217699  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:41.217710  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:34:41.217720  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:34:41.217726  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:34:41.217736  413977 main.go:141] libmachine: (ha-326651) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:34:41.217740  413977 main.go:141] libmachine: (ha-326651) Creating domain...
	I0731 18:34:41.217755  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:34:41.217764  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:34:41.217770  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:34:41.217777  413977 main.go:141] libmachine: (ha-326651) DBG | Checking permissions on dir: /home
	I0731 18:34:41.217786  413977 main.go:141] libmachine: (ha-326651) DBG | Skipping /home - not owner
	I0731 18:34:41.218814  413977 main.go:141] libmachine: (ha-326651) define libvirt domain using xml: 
	I0731 18:34:41.218847  413977 main.go:141] libmachine: (ha-326651) <domain type='kvm'>
	I0731 18:34:41.218864  413977 main.go:141] libmachine: (ha-326651)   <name>ha-326651</name>
	I0731 18:34:41.218876  413977 main.go:141] libmachine: (ha-326651)   <memory unit='MiB'>2200</memory>
	I0731 18:34:41.218885  413977 main.go:141] libmachine: (ha-326651)   <vcpu>2</vcpu>
	I0731 18:34:41.218910  413977 main.go:141] libmachine: (ha-326651)   <features>
	I0731 18:34:41.218920  413977 main.go:141] libmachine: (ha-326651)     <acpi/>
	I0731 18:34:41.218927  413977 main.go:141] libmachine: (ha-326651)     <apic/>
	I0731 18:34:41.218935  413977 main.go:141] libmachine: (ha-326651)     <pae/>
	I0731 18:34:41.218946  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.218970  413977 main.go:141] libmachine: (ha-326651)   </features>
	I0731 18:34:41.218991  413977 main.go:141] libmachine: (ha-326651)   <cpu mode='host-passthrough'>
	I0731 18:34:41.218999  413977 main.go:141] libmachine: (ha-326651)   
	I0731 18:34:41.219010  413977 main.go:141] libmachine: (ha-326651)   </cpu>
	I0731 18:34:41.219020  413977 main.go:141] libmachine: (ha-326651)   <os>
	I0731 18:34:41.219027  413977 main.go:141] libmachine: (ha-326651)     <type>hvm</type>
	I0731 18:34:41.219036  413977 main.go:141] libmachine: (ha-326651)     <boot dev='cdrom'/>
	I0731 18:34:41.219041  413977 main.go:141] libmachine: (ha-326651)     <boot dev='hd'/>
	I0731 18:34:41.219046  413977 main.go:141] libmachine: (ha-326651)     <bootmenu enable='no'/>
	I0731 18:34:41.219050  413977 main.go:141] libmachine: (ha-326651)   </os>
	I0731 18:34:41.219055  413977 main.go:141] libmachine: (ha-326651)   <devices>
	I0731 18:34:41.219063  413977 main.go:141] libmachine: (ha-326651)     <disk type='file' device='cdrom'>
	I0731 18:34:41.219073  413977 main.go:141] libmachine: (ha-326651)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/boot2docker.iso'/>
	I0731 18:34:41.219084  413977 main.go:141] libmachine: (ha-326651)       <target dev='hdc' bus='scsi'/>
	I0731 18:34:41.219101  413977 main.go:141] libmachine: (ha-326651)       <readonly/>
	I0731 18:34:41.219129  413977 main.go:141] libmachine: (ha-326651)     </disk>
	I0731 18:34:41.219141  413977 main.go:141] libmachine: (ha-326651)     <disk type='file' device='disk'>
	I0731 18:34:41.219153  413977 main.go:141] libmachine: (ha-326651)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:34:41.219172  413977 main.go:141] libmachine: (ha-326651)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/ha-326651.rawdisk'/>
	I0731 18:34:41.219178  413977 main.go:141] libmachine: (ha-326651)       <target dev='hda' bus='virtio'/>
	I0731 18:34:41.219196  413977 main.go:141] libmachine: (ha-326651)     </disk>
	I0731 18:34:41.219203  413977 main.go:141] libmachine: (ha-326651)     <interface type='network'>
	I0731 18:34:41.219213  413977 main.go:141] libmachine: (ha-326651)       <source network='mk-ha-326651'/>
	I0731 18:34:41.219220  413977 main.go:141] libmachine: (ha-326651)       <model type='virtio'/>
	I0731 18:34:41.219225  413977 main.go:141] libmachine: (ha-326651)     </interface>
	I0731 18:34:41.219230  413977 main.go:141] libmachine: (ha-326651)     <interface type='network'>
	I0731 18:34:41.219248  413977 main.go:141] libmachine: (ha-326651)       <source network='default'/>
	I0731 18:34:41.219268  413977 main.go:141] libmachine: (ha-326651)       <model type='virtio'/>
	I0731 18:34:41.219280  413977 main.go:141] libmachine: (ha-326651)     </interface>
	I0731 18:34:41.219290  413977 main.go:141] libmachine: (ha-326651)     <serial type='pty'>
	I0731 18:34:41.219301  413977 main.go:141] libmachine: (ha-326651)       <target port='0'/>
	I0731 18:34:41.219311  413977 main.go:141] libmachine: (ha-326651)     </serial>
	I0731 18:34:41.219326  413977 main.go:141] libmachine: (ha-326651)     <console type='pty'>
	I0731 18:34:41.219341  413977 main.go:141] libmachine: (ha-326651)       <target type='serial' port='0'/>
	I0731 18:34:41.219350  413977 main.go:141] libmachine: (ha-326651)     </console>
	I0731 18:34:41.219357  413977 main.go:141] libmachine: (ha-326651)     <rng model='virtio'>
	I0731 18:34:41.219367  413977 main.go:141] libmachine: (ha-326651)       <backend model='random'>/dev/random</backend>
	I0731 18:34:41.219374  413977 main.go:141] libmachine: (ha-326651)     </rng>
	I0731 18:34:41.219382  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.219388  413977 main.go:141] libmachine: (ha-326651)     
	I0731 18:34:41.219396  413977 main.go:141] libmachine: (ha-326651)   </devices>
	I0731 18:34:41.219406  413977 main.go:141] libmachine: (ha-326651) </domain>
	I0731 18:34:41.219420  413977 main.go:141] libmachine: (ha-326651) 
	I0731 18:34:41.223555  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:ee:73:0f in network default
	I0731 18:34:41.224056  413977 main.go:141] libmachine: (ha-326651) Ensuring networks are active...
	I0731 18:34:41.224078  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:41.224700  413977 main.go:141] libmachine: (ha-326651) Ensuring network default is active
	I0731 18:34:41.224971  413977 main.go:141] libmachine: (ha-326651) Ensuring network mk-ha-326651 is active
	I0731 18:34:41.225395  413977 main.go:141] libmachine: (ha-326651) Getting domain xml...
	I0731 18:34:41.226030  413977 main.go:141] libmachine: (ha-326651) Creating domain...
	I0731 18:34:42.424439  413977 main.go:141] libmachine: (ha-326651) Waiting to get IP...
	I0731 18:34:42.425190  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:42.425634  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:42.425656  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:42.425576  414000 retry.go:31] will retry after 203.424539ms: waiting for machine to come up
	I0731 18:34:42.631245  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:42.631765  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:42.631794  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:42.631718  414000 retry.go:31] will retry after 387.742735ms: waiting for machine to come up
	I0731 18:34:43.021313  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.021797  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.021827  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.021745  414000 retry.go:31] will retry after 469.359884ms: waiting for machine to come up
	I0731 18:34:43.492410  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.493086  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.493110  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.493000  414000 retry.go:31] will retry after 395.781269ms: waiting for machine to come up
	I0731 18:34:43.890674  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:43.891079  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:43.891100  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:43.891035  414000 retry.go:31] will retry after 734.285922ms: waiting for machine to come up
	I0731 18:34:44.626848  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:44.627387  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:44.627420  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:44.627317  414000 retry.go:31] will retry after 862.205057ms: waiting for machine to come up
	I0731 18:34:45.491435  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:45.491917  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:45.491947  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:45.491846  414000 retry.go:31] will retry after 1.106594488s: waiting for machine to come up
	I0731 18:34:46.599797  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:46.600340  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:46.600396  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:46.600270  414000 retry.go:31] will retry after 1.454701519s: waiting for machine to come up
	I0731 18:34:48.057051  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:48.057432  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:48.057458  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:48.057376  414000 retry.go:31] will retry after 1.796635335s: waiting for machine to come up
	I0731 18:34:49.856244  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:49.856665  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:49.856691  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:49.856622  414000 retry.go:31] will retry after 1.762364281s: waiting for machine to come up
	I0731 18:34:51.620624  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:51.621132  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:51.621169  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:51.621059  414000 retry.go:31] will retry after 2.662012393s: waiting for machine to come up
	I0731 18:34:54.286074  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:54.286542  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:54.286567  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:54.286494  414000 retry.go:31] will retry after 3.629071767s: waiting for machine to come up
	I0731 18:34:57.917456  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:34:57.917985  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find current IP address of domain ha-326651 in network mk-ha-326651
	I0731 18:34:57.918010  413977 main.go:141] libmachine: (ha-326651) DBG | I0731 18:34:57.917960  414000 retry.go:31] will retry after 3.371083275s: waiting for machine to come up
	I0731 18:35:01.290529  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.291019  413977 main.go:141] libmachine: (ha-326651) Found IP for machine: 192.168.39.220
	I0731 18:35:01.291048  413977 main.go:141] libmachine: (ha-326651) Reserving static IP address...
	I0731 18:35:01.291061  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has current primary IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.291356  413977 main.go:141] libmachine: (ha-326651) DBG | unable to find host DHCP lease matching {name: "ha-326651", mac: "52:54:00:eb:7a:d3", ip: "192.168.39.220"} in network mk-ha-326651
	I0731 18:35:01.367167  413977 main.go:141] libmachine: (ha-326651) DBG | Getting to WaitForSSH function...
	I0731 18:35:01.367205  413977 main.go:141] libmachine: (ha-326651) Reserved static IP address: 192.168.39.220
	I0731 18:35:01.367234  413977 main.go:141] libmachine: (ha-326651) Waiting for SSH to be available...
	I0731 18:35:01.370021  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.370436  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.370469  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.370729  413977 main.go:141] libmachine: (ha-326651) DBG | Using SSH client type: external
	I0731 18:35:01.370754  413977 main.go:141] libmachine: (ha-326651) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa (-rw-------)
	I0731 18:35:01.370871  413977 main.go:141] libmachine: (ha-326651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:35:01.370902  413977 main.go:141] libmachine: (ha-326651) DBG | About to run SSH command:
	I0731 18:35:01.370916  413977 main.go:141] libmachine: (ha-326651) DBG | exit 0
	I0731 18:35:01.500352  413977 main.go:141] libmachine: (ha-326651) DBG | SSH cmd err, output: <nil>: 
	I0731 18:35:01.500653  413977 main.go:141] libmachine: (ha-326651) KVM machine creation complete!
	I0731 18:35:01.501052  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:35:01.501680  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:01.501926  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:01.502099  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:35:01.502116  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:01.503604  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:35:01.503622  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:35:01.503629  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:35:01.503638  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.506124  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.506582  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.506611  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.506716  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.506897  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.507096  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.507234  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.507398  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.507653  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.507665  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:35:01.611686  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:01.611713  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:35:01.611721  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.614365  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.614780  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.614812  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.615001  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.615218  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.615364  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.615498  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.615680  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.615869  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.615882  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:35:01.725461  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:35:01.725589  413977 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:35:01.725602  413977 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:35:01.725610  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.725916  413977 buildroot.go:166] provisioning hostname "ha-326651"
	I0731 18:35:01.725942  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.726128  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.729355  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.729674  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.729702  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.729898  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.730090  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.730270  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.730414  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.730596  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.730786  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.730802  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651 && echo "ha-326651" | sudo tee /etc/hostname
	I0731 18:35:01.851718  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:35:01.851743  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.855156  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.855427  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.855488  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.855698  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:01.856028  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.856221  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:01.856452  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:01.856652  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:01.856824  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:01.856840  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:35:01.970596  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:01.970634  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:35:01.970687  413977 buildroot.go:174] setting up certificates
	I0731 18:35:01.970698  413977 provision.go:84] configureAuth start
	I0731 18:35:01.970710  413977 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:35:01.971058  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:01.974089  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.974436  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.974466  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.974709  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:01.976967  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.977265  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:01.977285  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:01.977458  413977 provision.go:143] copyHostCerts
	I0731 18:35:01.977493  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:01.977532  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:35:01.977541  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:01.977609  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:35:01.977684  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:01.977709  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:35:01.977716  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:01.977740  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:35:01.977780  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:01.977805  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:35:01.977811  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:01.977837  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:35:01.977887  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651 san=[127.0.0.1 192.168.39.220 ha-326651 localhost minikube]
	I0731 18:35:02.430845  413977 provision.go:177] copyRemoteCerts
	I0731 18:35:02.430916  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:35:02.430944  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.434564  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.434904  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.434935  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.435091  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.435332  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.435498  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.435619  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:02.520471  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:35:02.520541  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:35:02.546707  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:35:02.546778  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 18:35:02.572855  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:35:02.572944  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:35:02.596466  413977 provision.go:87] duration metric: took 625.753635ms to configureAuth
	I0731 18:35:02.596499  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:35:02.596755  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:02.596883  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.599585  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.599944  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.599974  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.600170  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.600371  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.600656  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.600812  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.601011  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:02.601178  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:02.601195  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:35:02.881432  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:35:02.881464  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:35:02.881472  413977 main.go:141] libmachine: (ha-326651) Calling .GetURL
	I0731 18:35:02.882773  413977 main.go:141] libmachine: (ha-326651) DBG | Using libvirt version 6000000
	I0731 18:35:02.885020  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.885340  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.885370  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.885550  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:35:02.885564  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:35:02.885571  413977 client.go:171] duration metric: took 22.069652293s to LocalClient.Create
	I0731 18:35:02.885591  413977 start.go:167] duration metric: took 22.069706495s to libmachine.API.Create "ha-326651"
	I0731 18:35:02.885601  413977 start.go:293] postStartSetup for "ha-326651" (driver="kvm2")
	I0731 18:35:02.885610  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:35:02.885630  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:02.885895  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:35:02.885927  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:02.887911  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.888288  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:02.888312  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:02.888522  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:02.888758  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:02.888942  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:02.889173  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:02.971215  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:35:02.975448  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:35:02.975480  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:35:02.975561  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:35:02.975633  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:35:02.975644  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:35:02.975738  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:35:02.985721  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
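filesync.go walks .minikube/addons and .minikube/files and mirrors whatever it finds onto the guest at the same relative path, which is how the local 4023132.pem above ends up under /etc/ssl/certs. A rough sketch of that scan (the directory comes from the log; the helper itself is an assumption, not minikube's filesync.go):

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// scanAssets lists files under root and the guest path each maps to,
	// mirroring the "Scanning .../.minikube/files for local assets" step.
	func scanAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(root, path)
			if relErr != nil {
				return relErr
			}
			// e.g. etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
			assets[path] = "/" + filepath.ToSlash(rel)
			return nil
		})
		return assets, err
	}

	func main() {
		assets, err := scanAssets("/home/jenkins/minikube-integration/19356-395032/.minikube/files")
		if err != nil {
			panic(err)
		}
		for src, dst := range assets {
			fmt.Printf("%s -> %s\n", src, dst)
		}
	}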
	I0731 18:35:03.009493  413977 start.go:296] duration metric: took 123.872449ms for postStartSetup
	I0731 18:35:03.009567  413977 main.go:141] libmachine: (ha-326651) Calling .GetConfigRaw
	I0731 18:35:03.010351  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:03.012910  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.013238  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.013270  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.013497  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:03.013674  413977 start.go:128] duration metric: took 22.216446388s to createHost
	I0731 18:35:03.013697  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.016116  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.016468  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.016500  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.016604  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.016796  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.016961  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.017101  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.017279  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:03.017448  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:35:03.017459  413977 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:35:03.125237  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722450903.098801256
	
	I0731 18:35:03.125270  413977 fix.go:216] guest clock: 1722450903.098801256
	I0731 18:35:03.125281  413977 fix.go:229] Guest: 2024-07-31 18:35:03.098801256 +0000 UTC Remote: 2024-07-31 18:35:03.013686749 +0000 UTC m=+22.326991001 (delta=85.114507ms)
	I0731 18:35:03.125331  413977 fix.go:200] guest clock delta is within tolerance: 85.114507ms
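fix.go reads the guest clock over SSH, compares it with the host clock, and only resyncs when the delta exceeds a tolerance; here the 85ms delta passes. A rough sketch of that comparison (the tolerance value and function name are assumptions, not minikube's actual code):

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether the guest clock is close enough to the
	// host clock that no resync is needed.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log line above.
		host := time.Date(2024, 7, 31, 18, 35, 3, 13686749, time.UTC)
		guest := time.Date(2024, 7, 31, 18, 35, 3, 98801256, time.UTC)
		delta, ok := clockDeltaOK(guest, host, 2*time.Second) // tolerance is an assumption
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}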
	I0731 18:35:03.125337  413977 start.go:83] releasing machines lock for "ha-326651", held for 22.328179384s
	I0731 18:35:03.125363  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.125651  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:03.128266  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.128568  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.128600  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.128767  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129351  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129506  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:03.129638  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:35:03.129693  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.129731  413977 ssh_runner.go:195] Run: cat /version.json
	I0731 18:35:03.129751  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:03.132297  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132549  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132650  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.132676  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132788  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.132898  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:03.132931  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:03.132973  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.133150  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:03.133150  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.133352  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:03.133338  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:03.133507  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:03.133652  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:03.209618  413977 ssh_runner.go:195] Run: systemctl --version
	I0731 18:35:03.235915  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:35:03.400504  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:35:03.406467  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:35:03.406541  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:35:03.424193  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:35:03.424225  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:35:03.424297  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:35:03.440499  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:35:03.455446  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:35:03.455510  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:35:03.470288  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:35:03.485030  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:35:03.608058  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:35:03.748606  413977 docker.go:233] disabling docker service ...
	I0731 18:35:03.748688  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:35:03.763497  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:35:03.776956  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:35:03.912903  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:35:04.051609  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
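Because this cluster runs CRI-O, the cri-dockerd and Docker units are stopped, disabled, and masked so they cannot claim the container runtime. A compressed sketch of issuing that systemctl sequence (illustrative only, not minikube's docker.go; failures are merely reported because a unit may not exist on the image):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same systemctl sequence the log runs for cri-docker and docker.
		steps := [][]string{
			{"systemctl", "stop", "-f", "cri-docker.socket"},
			{"systemctl", "stop", "-f", "cri-docker.service"},
			{"systemctl", "disable", "cri-docker.socket"},
			{"systemctl", "mask", "cri-docker.service"},
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		}
		for _, s := range steps {
			if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
				// A missing unit is not fatal for this step.
				fmt.Printf("%v: %v\n%s", s, err, out)
			}
		}
	}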
	I0731 18:35:04.065775  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:35:04.084878  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:35:04.084944  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.095985  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:35:04.096053  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.107308  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.118206  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.129146  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:35:04.140002  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.151131  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:35:04.169345  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
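The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pinning pause_image, switching cgroup_manager to cgroupfs, forcing conmon_cgroup to "pod", and opening unprivileged ports through default_sysctls. A hedged Go sketch of the same kind of line-oriented rewrite (the path and patterns come from the log; the helper is illustrative, not minikube's crio.go):

	package main

	import (
		"os"
		"regexp"
	)

	// rewriteConf applies a regexp replacement to a config file in place,
	// mirroring the `sudo sed -i 's|...|...|'` calls above.
	func rewriteConf(path, pattern, replacement string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(pattern).ReplaceAll(data, []byte(replacement))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		// Pin the pause image, as in the first sed above.
		if err := rewriteConf(conf, `(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`); err != nil {
			panic(err)
		}
		// Switch the cgroup manager, as in the second sed above.
		if err := rewriteConf(conf, `(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`); err != nil {
			panic(err)
		}
	}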
	I0731 18:35:04.180308  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:35:04.189948  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:35:04.190016  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:35:04.203308  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:35:04.213074  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:35:04.339820  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:35:04.473005  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:35:04.473089  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:35:04.478202  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:35:04.478277  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:35:04.482124  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:35:04.521550  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:35:04.521644  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:35:04.550817  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:35:04.582275  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:35:04.583668  413977 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:35:04.586549  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:04.586860  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:04.586886  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:04.587161  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:35:04.591521  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:35:04.605116  413977 kubeadm.go:883] updating cluster {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:35:04.605253  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:35:04.605299  413977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:35:04.635944  413977 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 18:35:04.636021  413977 ssh_runner.go:195] Run: which lz4
	I0731 18:35:04.639922  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 18:35:04.640026  413977 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 18:35:04.644215  413977 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 18:35:04.644249  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 18:35:06.093375  413977 crio.go:462] duration metric: took 1.45338213s to copy over tarball
	I0731 18:35:06.093466  413977 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 18:35:08.282574  413977 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.189071604s)
	I0731 18:35:08.282615  413977 crio.go:469] duration metric: took 2.189201764s to extract the tarball
	I0731 18:35:08.282625  413977 ssh_runner.go:146] rm: /preloaded.tar.lz4
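The preload path copies a ~406 MB tar.lz4 to the guest and extracts it into /var with `tar -I lz4`, logging each phase as a duration metric. A small illustrative sketch of timing such an extraction locally (the tar arguments are taken from the log; the wrapper around them is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Same extraction command the log runs on the guest, minus sudo.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	}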
	I0731 18:35:08.320900  413977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:35:08.369264  413977 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:35:08.369292  413977 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:35:08.369300  413977 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0731 18:35:08.369418  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:35:08.369484  413977 ssh_runner.go:195] Run: crio config
	I0731 18:35:08.412904  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:35:08.412927  413977 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 18:35:08.412936  413977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:35:08.412958  413977 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326651 NodeName:ha-326651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:35:08.413112  413977 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-326651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
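The kubeadm config above is rendered from the options struct logged at kubeadm.go:181: the pod CIDR becomes podSubnet, the service CIDR becomes serviceSubnet, the advertise address and API server port flow into localAPIEndpoint, and the per-component extra args land under extraArgs. A cut-down sketch of that templating (the template text here is an abbreviation for illustration, not minikube's full template):

	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		PodSubnet         string
		ServiceCIDR       string
		ClusterName       string
		KubernetesVersion string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress:  "192.168.39.220",
			APIServerPort:     8443,
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			ClusterName:       "mk",
			KubernetesVersion: "v1.30.3",
		}
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			panic(err)
		}
	}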
	
	I0731 18:35:08.413142  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:35:08.413185  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:35:08.429915  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:35:08.430038  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
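kube-vip runs as a static pod on each control plane and, with cp_enable and lb_enable set, advertises the HA virtual IP 192.168.39.254 and load-balances API traffic on port 8443. A throwaway Go check (purely illustrative, not part of minikube) that the VIP's API endpoint is accepting connections:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The virtual IP and port come from the kube-vip config above.
		addr := net.JoinHostPort("192.168.39.254", "8443")
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Printf("VIP not reachable yet: %v\n", err)
			return
		}
		conn.Close()
		fmt.Printf("VIP %s accepts connections\n", addr)
	}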
	I0731 18:35:08.430115  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:35:08.440735  413977 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:35:08.440834  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 18:35:08.453582  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 18:35:08.471025  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:35:08.487472  413977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 18:35:08.503698  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 18:35:08.520358  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:35:08.524197  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:35:08.535926  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:35:08.662409  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:35:08.680546  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.220
	I0731 18:35:08.680577  413977 certs.go:194] generating shared ca certs ...
	I0731 18:35:08.680599  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.680776  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:35:08.680838  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:35:08.680852  413977 certs.go:256] generating profile certs ...
	I0731 18:35:08.680923  413977 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:35:08.680943  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt with IP's: []
	I0731 18:35:08.813901  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt ...
	I0731 18:35:08.813932  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt: {Name:mkbf29d30b87ac9344f189deb736c1c30a7f569f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.814140  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key ...
	I0731 18:35:08.814156  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key: {Name:mk1aeab75fd0a97151206c81270c992b7289ce8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.814259  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03
	I0731 18:35:08.814281  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I0731 18:35:08.871436  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 ...
	I0731 18:35:08.871467  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03: {Name:mk0deec4f68a942a46259c6f72337b1840b5b859 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.871656  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03 ...
	I0731 18:35:08.871680  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03: {Name:mk239c1e471661396ec00ed8f27be84a4272e488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:08.871776  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cad66a03 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:35:08.871872  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cad66a03 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:35:08.871965  413977 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:35:08.871986  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt with IP's: []
	I0731 18:35:09.107578  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt ...
	I0731 18:35:09.107620  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt: {Name:mkfc63cb0330ae66e4cefacb0c34de64236dfcfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:09.107856  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key ...
	I0731 18:35:09.107880  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key: {Name:mk87e0227c814176c96ddf4f3b22cd65cbfe3820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:09.107994  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:35:09.108023  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:35:09.108048  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:35:09.108071  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:35:09.108090  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:35:09.108111  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:35:09.108135  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:35:09.108157  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:35:09.108246  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:35:09.108301  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:35:09.108326  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:35:09.108371  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:35:09.108436  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:35:09.108470  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:35:09.108550  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:35:09.108605  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.108630  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.108648  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.109342  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:35:09.135205  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:35:09.160752  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:35:09.186825  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:35:09.210864  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 18:35:09.235888  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:35:09.261260  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:35:09.285716  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:35:09.309506  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:35:09.333047  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:35:09.358243  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:35:09.383029  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:35:09.404779  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:35:09.412050  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:35:09.422976  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.427487  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.427556  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:35:09.433425  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:35:09.447663  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:35:09.466955  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.473257  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.473326  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:35:09.479726  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:35:09.494415  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:35:09.511247  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.516731  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.516795  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:35:09.522679  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
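certs.go installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0 above), which is how OpenSSL-based clients on the guest discover it. An illustrative Go wrapper around the same two shell steps (hash via the openssl CLI, then ln -fs); the helper name is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCALink computes the OpenSSL subject hash of a PEM cert and links
	// it into /etc/ssl/certs/<hash>.0, as the log does for each CA bundle.
	func installCALink(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
	}

	func main() {
		for _, p := range []string{
			"/usr/share/ca-certificates/minikubeCA.pem",
			"/usr/share/ca-certificates/402313.pem",
			"/usr/share/ca-certificates/4023132.pem",
		} {
			if err := installCALink(p); err != nil {
				fmt.Println(err)
			}
		}
	}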
	I0731 18:35:09.534570  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:35:09.538933  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:35:09.539000  413977 kubeadm.go:392] StartCluster: {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:35:09.539113  413977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:35:09.539186  413977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:35:09.580544  413977 cri.go:89] found id: ""
	I0731 18:35:09.580617  413977 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 18:35:09.591282  413977 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 18:35:09.601767  413977 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 18:35:09.612363  413977 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 18:35:09.612408  413977 kubeadm.go:157] found existing configuration files:
	
	I0731 18:35:09.612476  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 18:35:09.622972  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 18:35:09.623028  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 18:35:09.633010  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 18:35:09.644050  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 18:35:09.644194  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 18:35:09.655060  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 18:35:09.665410  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 18:35:09.665479  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 18:35:09.675773  413977 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 18:35:09.685542  413977 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 18:35:09.685616  413977 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
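The preceding checks look for the expected control-plane endpoint in each kubeconfig under /etc/kubernetes and remove any file that does not reference it; on this first start the files simply do not exist, so every grep exits 2 and every rm is a no-op. A condensed sketch of that cleanup loop (illustrative only, not kubeadm.go itself):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing at the wrong endpoint: remove it so kubeadm regenerates it.
				if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
					fmt.Println(rmErr)
				}
			}
		}
	}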
	I0731 18:35:09.696050  413977 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 18:35:09.807439  413977 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 18:35:09.807520  413977 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 18:35:09.947531  413977 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 18:35:09.947663  413977 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 18:35:09.947822  413977 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 18:35:10.156890  413977 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 18:35:10.376045  413977 out.go:204]   - Generating certificates and keys ...
	I0731 18:35:10.376178  413977 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 18:35:10.376260  413977 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 18:35:10.376366  413977 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 18:35:10.667880  413977 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 18:35:10.788539  413977 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 18:35:10.999419  413977 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 18:35:11.412365  413977 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 18:35:11.412592  413977 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-326651 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0731 18:35:11.691430  413977 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 18:35:11.691686  413977 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-326651 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I0731 18:35:11.748202  413977 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 18:35:11.849775  413977 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 18:35:12.073145  413977 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 18:35:12.073280  413977 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 18:35:12.218887  413977 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 18:35:12.334397  413977 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 18:35:12.435537  413977 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 18:35:12.601773  413977 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 18:35:12.765403  413977 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 18:35:12.766022  413977 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 18:35:12.768970  413977 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 18:35:12.771208  413977 out.go:204]   - Booting up control plane ...
	I0731 18:35:12.771324  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 18:35:12.771445  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 18:35:12.771526  413977 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 18:35:12.788777  413977 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 18:35:12.788896  413977 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 18:35:12.788933  413977 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 18:35:12.947583  413977 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 18:35:12.947718  413977 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 18:35:13.449322  413977 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.280738ms
	I0731 18:35:13.449442  413977 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 18:35:19.542500  413977 kubeadm.go:310] [api-check] The API server is healthy after 6.096981603s
	I0731 18:35:19.556522  413977 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 18:35:19.572914  413977 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 18:35:19.597409  413977 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 18:35:19.597654  413977 kubeadm.go:310] [mark-control-plane] Marking the node ha-326651 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 18:35:19.609399  413977 kubeadm.go:310] [bootstrap-token] Using token: mjwpqc.cas5affjevm676c6
	I0731 18:35:19.610932  413977 out.go:204]   - Configuring RBAC rules ...
	I0731 18:35:19.611041  413977 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 18:35:19.616009  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 18:35:19.623171  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 18:35:19.626030  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0731 18:35:19.632949  413977 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 18:35:19.639026  413977 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 18:35:19.952644  413977 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 18:35:20.389795  413977 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 18:35:20.950692  413977 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 18:35:20.952871  413977 kubeadm.go:310] 
	I0731 18:35:20.952930  413977 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 18:35:20.952936  413977 kubeadm.go:310] 
	I0731 18:35:20.953046  413977 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 18:35:20.953066  413977 kubeadm.go:310] 
	I0731 18:35:20.953115  413977 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 18:35:20.953189  413977 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 18:35:20.953268  413977 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 18:35:20.953276  413977 kubeadm.go:310] 
	I0731 18:35:20.953319  413977 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 18:35:20.953337  413977 kubeadm.go:310] 
	I0731 18:35:20.953406  413977 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 18:35:20.953418  413977 kubeadm.go:310] 
	I0731 18:35:20.953489  413977 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 18:35:20.953608  413977 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 18:35:20.953719  413977 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 18:35:20.953729  413977 kubeadm.go:310] 
	I0731 18:35:20.953844  413977 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 18:35:20.953966  413977 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 18:35:20.953978  413977 kubeadm.go:310] 
	I0731 18:35:20.954100  413977 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mjwpqc.cas5affjevm676c6 \
	I0731 18:35:20.954199  413977 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd \
	I0731 18:35:20.954232  413977 kubeadm.go:310] 	--control-plane 
	I0731 18:35:20.954247  413977 kubeadm.go:310] 
	I0731 18:35:20.954353  413977 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 18:35:20.954363  413977 kubeadm.go:310] 
	I0731 18:35:20.954462  413977 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mjwpqc.cas5affjevm676c6 \
	I0731 18:35:20.954585  413977 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd 
	I0731 18:35:20.954917  413977 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
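The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing that hash, assuming the CA sits at the certs path used earlier in this run (/var/lib/minikube/certs/ca.crt); this is an illustration, not kubeadm's own code:

// cacerthash.go - recompute a kubeadm-style discovery token CA cert hash.
// Illustrative sketch; the file path is an assumption based on the certs
// directory shown in the log above.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}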
	I0731 18:35:20.955067  413977 cni.go:84] Creating CNI manager for ""
	I0731 18:35:20.955084  413977 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 18:35:20.956822  413977 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 18:35:20.958071  413977 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 18:35:20.963420  413977 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 18:35:20.963436  413977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 18:35:20.982042  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 18:35:21.388053  413977 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 18:35:21.388123  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:21.388137  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651 minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=true
	I0731 18:35:21.522149  413977 ops.go:34] apiserver oom_adj: -16
	I0731 18:35:21.522177  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:22.022868  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:22.523244  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:23.022569  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:23.522992  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:24.022962  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:24.522691  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:25.022840  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:25.522450  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:26.022963  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:26.523135  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:27.022447  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:27.522648  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:28.022393  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:28.522782  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:29.022914  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:29.522590  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:30.022599  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:30.522458  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:31.023101  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:31.522322  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:32.022812  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:32.523180  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:33.023246  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:33.522836  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:34.023049  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 18:35:34.193870  413977 kubeadm.go:1113] duration metric: took 12.805818515s to wait for elevateKubeSystemPrivileges
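The burst of `kubectl get sa default` calls above is minikube polling roughly every 500ms until the default ServiceAccount exists, which is the wait that the elevateKubeSystemPrivileges duration metric summarizes. A stripped-down Go sketch of the same wait loop, shelling out to kubectl as the log does; the binary path and kubeconfig path are taken from this run, while the 2-minute timeout is an assumption:

// waitsa.go - poll until the "default" ServiceAccount exists, mirroring the
// repeated "kubectl get sa default" calls in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for default ServiceAccount")
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}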
	I0731 18:35:34.193915  413977 kubeadm.go:394] duration metric: took 24.654920078s to StartCluster
	I0731 18:35:34.193941  413977 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:34.194037  413977 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:35:34.194906  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:35:34.195173  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 18:35:34.195233  413977 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:35:34.195269  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:35:34.195279  413977 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 18:35:34.195343  413977 addons.go:69] Setting storage-provisioner=true in profile "ha-326651"
	I0731 18:35:34.195356  413977 addons.go:69] Setting default-storageclass=true in profile "ha-326651"
	I0731 18:35:34.195383  413977 addons.go:234] Setting addon storage-provisioner=true in "ha-326651"
	I0731 18:35:34.195391  413977 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-326651"
	I0731 18:35:34.195416  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:35:34.195479  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:34.195798  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.195824  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.195886  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.195924  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.211396  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40147
	I0731 18:35:34.211486  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I0731 18:35:34.211858  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.212018  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.212400  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.212423  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.212557  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.212580  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.212749  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.212937  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.213165  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.213310  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.213336  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.215291  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:35:34.215524  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 18:35:34.215965  413977 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 18:35:34.216093  413977 addons.go:234] Setting addon default-storageclass=true in "ha-326651"
	I0731 18:35:34.216145  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:35:34.216416  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.216456  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.229783  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0731 18:35:34.230417  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.230979  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.231004  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.231357  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.231363  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44631
	I0731 18:35:34.231603  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.231750  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.232266  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.232293  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.232677  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.233298  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:34.233331  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:34.233510  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:34.235939  413977 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 18:35:34.237554  413977 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:35:34.237581  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 18:35:34.237606  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:34.240759  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.241209  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:34.241235  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.241401  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:34.241596  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:34.241765  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:34.241903  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:34.254394  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44309
	I0731 18:35:34.254910  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:34.255362  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:34.255385  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:34.255779  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:34.256046  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:35:34.257874  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:35:34.258158  413977 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 18:35:34.258179  413977 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 18:35:34.258202  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:35:34.261652  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.262186  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:35:34.262214  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:35:34.262389  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:35:34.262567  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:35:34.262734  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:35:34.262870  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:35:34.420261  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 18:35:34.503010  413977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 18:35:34.513515  413977 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 18:35:34.864751  413977 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
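The pipeline at 18:35:34.420261 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1), which is what the "host record injected" line above confirms. A small Go sketch of the same text transformation on a Corefile; the sample Corefile is abbreviated, and the extra `log` directive the pipeline also adds is omitted here:

// corednshosts.go - insert a "hosts" stanza ahead of the forward plugin, the
// same transformation the sed pipeline in the log performs on the coredns
// ConfigMap. The sample Corefile is abbreviated for illustration.
package main

import (
	"fmt"
	"strings"
)

func injectHosts(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // add the hosts block just before the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
`
	fmt.Print(injectHosts(corefile, "192.168.39.1"))
}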
	I0731 18:35:35.137522  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137552  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137549  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137624  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137853  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.137858  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.137869  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.137880  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.137884  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137889  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.137892  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.137897  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.138114  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.138127  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.138140  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.138148  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.138210  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.138246  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.138363  413977 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 18:35:35.138374  413977 round_trippers.go:469] Request Headers:
	I0731 18:35:35.138384  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:35:35.138390  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:35:35.153319  413977 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0731 18:35:35.153970  413977 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 18:35:35.153984  413977 round_trippers.go:469] Request Headers:
	I0731 18:35:35.153995  413977 round_trippers.go:473]     Content-Type: application/json
	I0731 18:35:35.154005  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:35:35.154012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:35:35.157091  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
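The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above rewrites the standard StorageClass after the storageclass addon manifest is applied, typically to mark it as the cluster default. A hedged client-go sketch of such an update; the kubeconfig path is taken from this run, and the annotation-based defaulting is an assumption about what the PUT carries:

// defaultsc.go - mark the "standard" StorageClass as default, roughly the
// effect of the GET/PUT round trip above. Uses client-go; paths and the
// annotation choice are assumptions.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19356-395032/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("standard StorageClass marked as default")
}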
	I0731 18:35:35.157373  413977 main.go:141] libmachine: Making call to close driver server
	I0731 18:35:35.157391  413977 main.go:141] libmachine: (ha-326651) Calling .Close
	I0731 18:35:35.157720  413977 main.go:141] libmachine: Successfully made call to close driver server
	I0731 18:35:35.157740  413977 main.go:141] libmachine: (ha-326651) DBG | Closing plugin on server side
	I0731 18:35:35.157747  413977 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 18:35:35.159573  413977 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 18:35:35.160843  413977 addons.go:510] duration metric: took 965.561242ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 18:35:35.160881  413977 start.go:246] waiting for cluster config update ...
	I0731 18:35:35.160896  413977 start.go:255] writing updated cluster config ...
	I0731 18:35:35.162705  413977 out.go:177] 
	I0731 18:35:35.164248  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:35.164331  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:35.165987  413977 out.go:177] * Starting "ha-326651-m02" control-plane node in "ha-326651" cluster
	I0731 18:35:35.167212  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:35:35.167231  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:35:35.167318  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:35:35.167329  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:35:35.167389  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:35.167534  413977 start.go:360] acquireMachinesLock for ha-326651-m02: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:35:35.167575  413977 start.go:364] duration metric: took 22.182µs to acquireMachinesLock for "ha-326651-m02"
	I0731 18:35:35.167592  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:35:35.167663  413977 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 18:35:35.169251  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:35:35.169333  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:35:35.169357  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:35:35.183815  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0731 18:35:35.184191  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:35:35.184697  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:35:35.184723  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:35:35.185029  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:35:35.185228  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:35.185369  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:35.185497  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:35:35.185520  413977 client.go:168] LocalClient.Create starting
	I0731 18:35:35.185552  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:35:35.185590  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:35:35.185610  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:35:35.185681  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:35:35.185707  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:35:35.185723  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:35:35.185751  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:35:35.185764  413977 main.go:141] libmachine: (ha-326651-m02) Calling .PreCreateCheck
	I0731 18:35:35.185933  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:35.186320  413977 main.go:141] libmachine: Creating machine...
	I0731 18:35:35.186336  413977 main.go:141] libmachine: (ha-326651-m02) Calling .Create
	I0731 18:35:35.186440  413977 main.go:141] libmachine: (ha-326651-m02) Creating KVM machine...
	I0731 18:35:35.187617  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found existing default KVM network
	I0731 18:35:35.187723  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found existing private KVM network mk-ha-326651
	I0731 18:35:35.187863  413977 main.go:141] libmachine: (ha-326651-m02) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 ...
	I0731 18:35:35.187884  413977 main.go:141] libmachine: (ha-326651-m02) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:35:35.187968  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.187853  414340 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:35:35.188055  413977 main.go:141] libmachine: (ha-326651-m02) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:35:35.446325  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.446211  414340 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa...
	I0731 18:35:35.643707  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.643548  414340 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/ha-326651-m02.rawdisk...
	I0731 18:35:35.643751  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Writing magic tar header
	I0731 18:35:35.643768  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Writing SSH key tar header
	I0731 18:35:35.643782  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:35.643705  414340 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 ...
	I0731 18:35:35.643867  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02
	I0731 18:35:35.643929  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:35:35.643945  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02 (perms=drwx------)
	I0731 18:35:35.643966  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:35:35.643978  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:35:35.643989  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:35:35.644003  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:35:35.644017  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:35:35.644030  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:35:35.644039  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:35:35.644054  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:35:35.644064  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Checking permissions on dir: /home
	I0731 18:35:35.644078  413977 main.go:141] libmachine: (ha-326651-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:35:35.644091  413977 main.go:141] libmachine: (ha-326651-m02) Creating domain...
	I0731 18:35:35.644107  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Skipping /home - not owner
	I0731 18:35:35.645029  413977 main.go:141] libmachine: (ha-326651-m02) define libvirt domain using xml: 
	I0731 18:35:35.645048  413977 main.go:141] libmachine: (ha-326651-m02) <domain type='kvm'>
	I0731 18:35:35.645056  413977 main.go:141] libmachine: (ha-326651-m02)   <name>ha-326651-m02</name>
	I0731 18:35:35.645066  413977 main.go:141] libmachine: (ha-326651-m02)   <memory unit='MiB'>2200</memory>
	I0731 18:35:35.645075  413977 main.go:141] libmachine: (ha-326651-m02)   <vcpu>2</vcpu>
	I0731 18:35:35.645082  413977 main.go:141] libmachine: (ha-326651-m02)   <features>
	I0731 18:35:35.645090  413977 main.go:141] libmachine: (ha-326651-m02)     <acpi/>
	I0731 18:35:35.645100  413977 main.go:141] libmachine: (ha-326651-m02)     <apic/>
	I0731 18:35:35.645107  413977 main.go:141] libmachine: (ha-326651-m02)     <pae/>
	I0731 18:35:35.645114  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645119  413977 main.go:141] libmachine: (ha-326651-m02)   </features>
	I0731 18:35:35.645126  413977 main.go:141] libmachine: (ha-326651-m02)   <cpu mode='host-passthrough'>
	I0731 18:35:35.645131  413977 main.go:141] libmachine: (ha-326651-m02)   
	I0731 18:35:35.645141  413977 main.go:141] libmachine: (ha-326651-m02)   </cpu>
	I0731 18:35:35.645189  413977 main.go:141] libmachine: (ha-326651-m02)   <os>
	I0731 18:35:35.645219  413977 main.go:141] libmachine: (ha-326651-m02)     <type>hvm</type>
	I0731 18:35:35.645228  413977 main.go:141] libmachine: (ha-326651-m02)     <boot dev='cdrom'/>
	I0731 18:35:35.645238  413977 main.go:141] libmachine: (ha-326651-m02)     <boot dev='hd'/>
	I0731 18:35:35.645245  413977 main.go:141] libmachine: (ha-326651-m02)     <bootmenu enable='no'/>
	I0731 18:35:35.645252  413977 main.go:141] libmachine: (ha-326651-m02)   </os>
	I0731 18:35:35.645260  413977 main.go:141] libmachine: (ha-326651-m02)   <devices>
	I0731 18:35:35.645267  413977 main.go:141] libmachine: (ha-326651-m02)     <disk type='file' device='cdrom'>
	I0731 18:35:35.645277  413977 main.go:141] libmachine: (ha-326651-m02)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/boot2docker.iso'/>
	I0731 18:35:35.645289  413977 main.go:141] libmachine: (ha-326651-m02)       <target dev='hdc' bus='scsi'/>
	I0731 18:35:35.645297  413977 main.go:141] libmachine: (ha-326651-m02)       <readonly/>
	I0731 18:35:35.645302  413977 main.go:141] libmachine: (ha-326651-m02)     </disk>
	I0731 18:35:35.645311  413977 main.go:141] libmachine: (ha-326651-m02)     <disk type='file' device='disk'>
	I0731 18:35:35.645317  413977 main.go:141] libmachine: (ha-326651-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:35:35.645326  413977 main.go:141] libmachine: (ha-326651-m02)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/ha-326651-m02.rawdisk'/>
	I0731 18:35:35.645332  413977 main.go:141] libmachine: (ha-326651-m02)       <target dev='hda' bus='virtio'/>
	I0731 18:35:35.645339  413977 main.go:141] libmachine: (ha-326651-m02)     </disk>
	I0731 18:35:35.645344  413977 main.go:141] libmachine: (ha-326651-m02)     <interface type='network'>
	I0731 18:35:35.645351  413977 main.go:141] libmachine: (ha-326651-m02)       <source network='mk-ha-326651'/>
	I0731 18:35:35.645356  413977 main.go:141] libmachine: (ha-326651-m02)       <model type='virtio'/>
	I0731 18:35:35.645362  413977 main.go:141] libmachine: (ha-326651-m02)     </interface>
	I0731 18:35:35.645367  413977 main.go:141] libmachine: (ha-326651-m02)     <interface type='network'>
	I0731 18:35:35.645390  413977 main.go:141] libmachine: (ha-326651-m02)       <source network='default'/>
	I0731 18:35:35.645418  413977 main.go:141] libmachine: (ha-326651-m02)       <model type='virtio'/>
	I0731 18:35:35.645438  413977 main.go:141] libmachine: (ha-326651-m02)     </interface>
	I0731 18:35:35.645454  413977 main.go:141] libmachine: (ha-326651-m02)     <serial type='pty'>
	I0731 18:35:35.645467  413977 main.go:141] libmachine: (ha-326651-m02)       <target port='0'/>
	I0731 18:35:35.645474  413977 main.go:141] libmachine: (ha-326651-m02)     </serial>
	I0731 18:35:35.645486  413977 main.go:141] libmachine: (ha-326651-m02)     <console type='pty'>
	I0731 18:35:35.645496  413977 main.go:141] libmachine: (ha-326651-m02)       <target type='serial' port='0'/>
	I0731 18:35:35.645502  413977 main.go:141] libmachine: (ha-326651-m02)     </console>
	I0731 18:35:35.645511  413977 main.go:141] libmachine: (ha-326651-m02)     <rng model='virtio'>
	I0731 18:35:35.645541  413977 main.go:141] libmachine: (ha-326651-m02)       <backend model='random'>/dev/random</backend>
	I0731 18:35:35.645565  413977 main.go:141] libmachine: (ha-326651-m02)     </rng>
	I0731 18:35:35.645575  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645584  413977 main.go:141] libmachine: (ha-326651-m02)     
	I0731 18:35:35.645594  413977 main.go:141] libmachine: (ha-326651-m02)   </devices>
	I0731 18:35:35.645604  413977 main.go:141] libmachine: (ha-326651-m02) </domain>
	I0731 18:35:35.645614  413977 main.go:141] libmachine: (ha-326651-m02) 
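The XML listing above is the guest definition handed to libvirt for ha-326651-m02. A minimal sketch of defining and booting a domain from such XML with the libvirt Go bindings; this is not the kvm2 driver's actual code, and the import path and XML file name are assumptions:

// definedomain.go - define and start a guest from domain XML, the step the
// "define libvirt domain using xml" / "Creating domain..." lines correspond to.
// Illustrative only; minikube's kvm2 driver wraps this differently.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-326651-m02.xml") // domain XML like the block above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // URI from the cluster config in this run
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persist the domain definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}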
	I0731 18:35:35.652329  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:98:43:24 in network default
	I0731 18:35:35.652898  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring networks are active...
	I0731 18:35:35.652925  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:35.653591  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring network default is active
	I0731 18:35:35.653867  413977 main.go:141] libmachine: (ha-326651-m02) Ensuring network mk-ha-326651 is active
	I0731 18:35:35.654354  413977 main.go:141] libmachine: (ha-326651-m02) Getting domain xml...
	I0731 18:35:35.655121  413977 main.go:141] libmachine: (ha-326651-m02) Creating domain...
	I0731 18:35:36.861124  413977 main.go:141] libmachine: (ha-326651-m02) Waiting to get IP...
	I0731 18:35:36.862084  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:36.862504  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:36.862566  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:36.862493  414340 retry.go:31] will retry after 199.826809ms: waiting for machine to come up
	I0731 18:35:37.064345  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.064927  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.064967  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.064860  414340 retry.go:31] will retry after 236.948402ms: waiting for machine to come up
	I0731 18:35:37.303612  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.304140  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.304168  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.304071  414340 retry.go:31] will retry after 402.03658ms: waiting for machine to come up
	I0731 18:35:37.707311  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:37.707733  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:37.707761  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:37.707695  414340 retry.go:31] will retry after 569.979997ms: waiting for machine to come up
	I0731 18:35:38.279602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:38.280082  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:38.280114  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:38.280026  414340 retry.go:31] will retry after 586.366279ms: waiting for machine to come up
	I0731 18:35:38.867792  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:38.868371  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:38.868424  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:38.868260  414340 retry.go:31] will retry after 687.200514ms: waiting for machine to come up
	I0731 18:35:39.557177  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:39.557574  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:39.557602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:39.557525  414340 retry.go:31] will retry after 1.024789258s: waiting for machine to come up
	I0731 18:35:40.584078  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:40.584531  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:40.584563  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:40.584464  414340 retry.go:31] will retry after 1.404649213s: waiting for machine to come up
	I0731 18:35:41.991082  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:41.991564  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:41.991590  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:41.991535  414340 retry.go:31] will retry after 1.367302302s: waiting for machine to come up
	I0731 18:35:43.361034  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:43.361505  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:43.361538  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:43.361449  414340 retry.go:31] will retry after 1.67771358s: waiting for machine to come up
	I0731 18:35:45.041027  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:45.041462  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:45.041486  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:45.041412  414340 retry.go:31] will retry after 2.147309485s: waiting for machine to come up
	I0731 18:35:47.190621  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:47.191055  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:47.191083  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:47.191003  414340 retry.go:31] will retry after 3.358926024s: waiting for machine to come up
	I0731 18:35:50.551544  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:50.552176  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:50.552204  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:50.552107  414340 retry.go:31] will retry after 3.792833111s: waiting for machine to come up
	I0731 18:35:54.349209  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:54.349784  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find current IP address of domain ha-326651-m02 in network mk-ha-326651
	I0731 18:35:54.349812  413977 main.go:141] libmachine: (ha-326651-m02) DBG | I0731 18:35:54.349732  414340 retry.go:31] will retry after 3.445591127s: waiting for machine to come up
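The "will retry after …" lines above show a backoff loop waiting for the new guest to obtain a DHCP lease in mk-ha-326651. A generic stdlib Go sketch of that wait-with-growing-backoff pattern; lookupIP below is a placeholder for the driver's real lease check, and the intervals and timeout are assumptions:

// waitip.go - back off while waiting for a condition, mirroring the
// "waiting for machine to come up" retries above. lookupIP is a stand-in,
// not real minikube code.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

var attempts int

// lookupIP pretends to ask libvirt for the guest's current IP address,
// succeeding only after a few attempts so the example terminates.
func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.202", nil
}

func main() {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(3 * time.Minute)
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for machine to come up")
		}
		// Grow the delay and add jitter, similar to the increasing intervals in the log.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
}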
	I0731 18:35:57.797811  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.798304  413977 main.go:141] libmachine: (ha-326651-m02) Found IP for machine: 192.168.39.202
	I0731 18:35:57.798328  413977 main.go:141] libmachine: (ha-326651-m02) Reserving static IP address...
	I0731 18:35:57.798341  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has current primary IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.798792  413977 main.go:141] libmachine: (ha-326651-m02) DBG | unable to find host DHCP lease matching {name: "ha-326651-m02", mac: "52:54:00:d7:a8:57", ip: "192.168.39.202"} in network mk-ha-326651
	I0731 18:35:57.872962  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Getting to WaitForSSH function...
	I0731 18:35:57.872999  413977 main.go:141] libmachine: (ha-326651-m02) Reserved static IP address: 192.168.39.202
	I0731 18:35:57.873013  413977 main.go:141] libmachine: (ha-326651-m02) Waiting for SSH to be available...
	I0731 18:35:57.875745  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.876129  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:57.876160  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:57.876305  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using SSH client type: external
	I0731 18:35:57.876339  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa (-rw-------)
	I0731 18:35:57.876437  413977 main.go:141] libmachine: (ha-326651-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:35:57.876464  413977 main.go:141] libmachine: (ha-326651-m02) DBG | About to run SSH command:
	I0731 18:35:57.876477  413977 main.go:141] libmachine: (ha-326651-m02) DBG | exit 0
	I0731 18:35:58.004566  413977 main.go:141] libmachine: (ha-326651-m02) DBG | SSH cmd err, output: <nil>: 
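WaitForSSH above probes the guest by running `exit 0` through the external ssh client with host-key checking disabled, treating a zero exit status as "SSH available". A stdlib Go sketch of the same probe; the address and key path come from this run, while the retry policy is an assumption:

// waitssh.go - probe SSH availability by running "exit 0", like WaitForSSH
// above. The key path and address come from the log; the retry policy is an
// illustrative assumption.
package main

import (
	"log"
	"os/exec"
	"time"
)

func sshReady(addr, key string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key,
		"docker@"+addr,
		"exit 0",
	)
	return cmd.Run() == nil
}

func main() {
	addr := "192.168.39.202"
	key := "/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady(addr, key) {
			log.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("SSH never became available")
}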
	I0731 18:35:58.004814  413977 main.go:141] libmachine: (ha-326651-m02) KVM machine creation complete!
	I0731 18:35:58.005200  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:58.005758  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:58.005947  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:58.006104  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:35:58.006123  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:35:58.007450  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:35:58.007465  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:35:58.007471  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:35:58.007477  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.009887  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.010310  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.010341  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.010446  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.010629  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.010786  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.010929  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.011127  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.011396  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.011415  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:35:58.120065  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:35:58.120100  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:35:58.120111  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.123179  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.123572  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.123602  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.123756  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.123987  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.124130  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.124325  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.124510  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.124739  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.124755  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:35:58.233257  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:35:58.233341  413977 main.go:141] libmachine: found compatible host: buildroot
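
Note: the provisioner is identified from the ID field of /etc/os-release (here "buildroot"). A quick manual reproduction of the same check on the guest (illustrative only):

    # Read the distro ID the same way the provisioning step does.
    . /etc/os-release
    echo "detected provisioner: ${ID}"   # prints "buildroot" on the minikube guest image
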
	I0731 18:35:58.233359  413977 main.go:141] libmachine: Provisioning with buildroot...
	I0731 18:35:58.233373  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.233688  413977 buildroot.go:166] provisioning hostname "ha-326651-m02"
	I0731 18:35:58.233723  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.234009  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.236712  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.237040  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.237074  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.237243  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.237437  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.237601  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.237758  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.237947  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.238160  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.238172  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651-m02 && echo "ha-326651-m02" | sudo tee /etc/hostname
	I0731 18:35:58.366608  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651-m02
	
	I0731 18:35:58.366642  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.369425  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.369745  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.369786  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.369940  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.370170  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.370387  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.370564  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.370744  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.370963  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.370988  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:35:58.489910  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
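
Note: the hostname step first sets the transient hostname, then rewrites the 127.0.1.1 entry in /etc/hosts (script shown above). A quick sanity check on the guest afterwards (illustrative only):

    # Confirm both the hostname and the /etc/hosts mapping written by the step above.
    hostname                          # expect: ha-326651-m02
    grep '^127\.0\.1\.1' /etc/hosts   # expect: 127.0.1.1 ha-326651-m02
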
	I0731 18:35:58.489944  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:35:58.489960  413977 buildroot.go:174] setting up certificates
	I0731 18:35:58.489970  413977 provision.go:84] configureAuth start
	I0731 18:35:58.489978  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetMachineName
	I0731 18:35:58.490280  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:58.492850  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.493212  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.493238  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.493369  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.495952  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.496350  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.496385  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.496553  413977 provision.go:143] copyHostCerts
	I0731 18:35:58.496584  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:58.496622  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:35:58.496635  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:35:58.496708  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:35:58.496805  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:58.496830  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:35:58.496840  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:35:58.496887  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:35:58.496954  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:58.496980  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:35:58.496990  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:35:58.497024  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:35:58.497091  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651-m02 san=[127.0.0.1 192.168.39.202 ha-326651-m02 localhost minikube]
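
Note: the server certificate is signed by the machine CA with the SAN list shown above (loopback, the node IP, the hostname, localhost, minikube). The result can be inspected with openssl, using the path from the log (illustrative only):

    # Print the Subject Alternative Name entries of the generated server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
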
	I0731 18:35:58.731508  413977 provision.go:177] copyRemoteCerts
	I0731 18:35:58.731583  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:35:58.731619  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.734088  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.734437  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.734464  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.734630  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.734911  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.735109  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.735260  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:58.819261  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:35:58.819352  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:35:58.844920  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:35:58.845002  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:35:58.868993  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:35:58.869083  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:35:58.893720  413977 provision.go:87] duration metric: took 403.735131ms to configureAuth
	I0731 18:35:58.893748  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:35:58.893955  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:35:58.894049  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:58.896796  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.897200  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:58.897231  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:58.897376  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:58.897584  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.897747  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:58.897905  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:58.898067  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:58.898232  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:58.898247  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:35:59.184923  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:35:59.184951  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:35:59.184960  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetURL
	I0731 18:35:59.186313  413977 main.go:141] libmachine: (ha-326651-m02) DBG | Using libvirt version 6000000
	I0731 18:35:59.188530  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.188801  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.188829  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.189018  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:35:59.189039  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:35:59.189047  413977 client.go:171] duration metric: took 24.003516515s to LocalClient.Create
	I0731 18:35:59.189072  413977 start.go:167] duration metric: took 24.003575545s to libmachine.API.Create "ha-326651"
	I0731 18:35:59.189085  413977 start.go:293] postStartSetup for "ha-326651-m02" (driver="kvm2")
	I0731 18:35:59.189102  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:35:59.189127  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.189397  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:35:59.189422  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.191929  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.192325  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.192356  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.192553  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.192777  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.192956  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.193139  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.280775  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:35:59.285375  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:35:59.285408  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:35:59.285476  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:35:59.285561  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:35:59.285572  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:35:59.285665  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:35:59.296957  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:35:59.322867  413977 start.go:296] duration metric: took 133.767315ms for postStartSetup
	I0731 18:35:59.322920  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetConfigRaw
	I0731 18:35:59.323524  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:59.326374  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.326710  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.326737  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.326972  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:35:59.327152  413977 start.go:128] duration metric: took 24.15947511s to createHost
	I0731 18:35:59.327176  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.329421  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.329811  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.329842  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.330004  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.330187  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.330355  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.330490  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.330677  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:35:59.330867  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0731 18:35:59.330880  413977 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:35:59.441112  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722450959.415688208
	
	I0731 18:35:59.441137  413977 fix.go:216] guest clock: 1722450959.415688208
	I0731 18:35:59.441147  413977 fix.go:229] Guest: 2024-07-31 18:35:59.415688208 +0000 UTC Remote: 2024-07-31 18:35:59.327163108 +0000 UTC m=+78.640467370 (delta=88.5251ms)
	I0731 18:35:59.441168  413977 fix.go:200] guest clock delta is within tolerance: 88.5251ms
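
Note: the clock check compares the guest's "date +%s.%N" output with the host-side timestamp and accepts small deltas (88.5ms here). A hand-rolled version of the same comparison, reusing the node address and key from the log; the one-second threshold below is only an assumption for illustration:

    # Compare guest time (read over SSH) with local time and flag drift above one second (assumed threshold).
    key=/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa
    guest=$(ssh -i "$key" -o StrictHostKeyChecking=no docker@192.168.39.202 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d;
      printf "delta=%.3fs %s\n", d, (d < 1 ? "within tolerance" : "drift") }'
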
	I0731 18:35:59.441175  413977 start.go:83] releasing machines lock for "ha-326651-m02", held for 24.273590624s
	I0731 18:35:59.441200  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.441487  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:35:59.444241  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.444718  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.444758  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.447114  413977 out.go:177] * Found network options:
	I0731 18:35:59.448560  413977 out.go:177]   - NO_PROXY=192.168.39.220
	W0731 18:35:59.449919  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:35:59.449954  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450491  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450707  413977 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:35:59.450803  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:35:59.450851  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	W0731 18:35:59.450874  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:35:59.450964  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:35:59.450991  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:35:59.453542  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453650  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453893  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.453933  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.453958  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:35:59.453971  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:35:59.454091  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.454192  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:35:59.454280  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.454383  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:35:59.454440  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.454524  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:35:59.454592  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.454665  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:35:59.694205  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:35:59.700544  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:35:59.700620  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:35:59.717245  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:35:59.717278  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:35:59.717354  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:35:59.739360  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:35:59.758132  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:35:59.758199  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:35:59.781092  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:35:59.800140  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:35:59.929257  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:36:00.079862  413977 docker.go:233] disabling docker service ...
	I0731 18:36:00.079950  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:36:00.094037  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:36:00.106860  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:36:00.246092  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:36:00.384050  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:36:00.398412  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:36:00.418759  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:36:00.418830  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.429240  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:36:00.429313  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.440139  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.450612  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.461331  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:36:00.472071  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.482078  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:36:00.499048  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
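
Note: the sequence of sed edits above pins the pause image, switches cri-o to the cgroupfs cgroup manager, moves conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls. The resulting drop-in can be verified with (illustrative only):

    # Check the values the edits above are expected to leave in the cri-o drop-in config.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
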
	I0731 18:36:00.509449  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:36:00.518961  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:36:00.519037  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:36:00.532607  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
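
Note: the failed sysctl above is expected; net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the modprobe follows it. The same recovery, reproduced manually (illustrative only):

    # Load br_netfilter, re-check the bridge netfilter sysctl, then enable IPv4 forwarding.
    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
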
	I0731 18:36:00.542051  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:00.658660  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:36:00.797574  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:36:00.797659  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:36:00.802331  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:36:00.802395  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:36:00.806274  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:36:00.846409  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:36:00.846496  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:36:00.876400  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:36:00.906370  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:36:00.908199  413977 out.go:177]   - env NO_PROXY=192.168.39.220
	I0731 18:36:00.909625  413977 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:36:00.912094  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:36:00.912420  413977 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:35:50 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:36:00.912442  413977 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:36:00.912633  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:36:00.916728  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:36:00.930599  413977 mustload.go:65] Loading cluster: ha-326651
	I0731 18:36:00.930859  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:00.931240  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:00.931282  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:00.946413  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42361
	I0731 18:36:00.946933  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:00.947482  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:00.947508  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:00.947828  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:00.948025  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:36:00.949643  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:36:00.950006  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:00.950032  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:00.965008  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I0731 18:36:00.965487  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:00.966001  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:00.966030  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:00.966343  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:00.966528  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:36:00.966740  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.202
	I0731 18:36:00.966751  413977 certs.go:194] generating shared ca certs ...
	I0731 18:36:00.966767  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:00.966890  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:36:00.966927  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:36:00.966937  413977 certs.go:256] generating profile certs ...
	I0731 18:36:00.967008  413977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:36:00.967033  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c
	I0731 18:36:00.967054  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.254]
	I0731 18:36:01.112495  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c ...
	I0731 18:36:01.112531  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c: {Name:mk8ceeb615d268d5b0f00c91b069a1a3723f2c04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:01.112733  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c ...
	I0731 18:36:01.112754  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c: {Name:mk00478113f238cc7eec245068b06cb5f757c59c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:36:01.112857  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.1c9aea3c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:36:01.113024  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.1c9aea3c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:36:01.113201  413977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:36:01.113219  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:36:01.113238  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:36:01.113253  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:36:01.113273  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:36:01.113289  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:36:01.113307  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:36:01.113325  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:36:01.113340  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:36:01.113413  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:36:01.113456  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:36:01.113472  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:36:01.113504  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:36:01.113536  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:36:01.113568  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:36:01.113626  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:36:01.113664  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.113684  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.113703  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.113767  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:36:01.116870  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:01.117252  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:36:01.117275  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:01.117512  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:36:01.117750  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:36:01.117916  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:36:01.118056  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:36:01.196792  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 18:36:01.202328  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 18:36:01.213577  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 18:36:01.218105  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 18:36:01.229389  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 18:36:01.234401  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 18:36:01.246446  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 18:36:01.250856  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 18:36:01.262748  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 18:36:01.267259  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 18:36:01.278635  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 18:36:01.283174  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 18:36:01.294090  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:36:01.319298  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:36:01.342683  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:36:01.367373  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:36:01.391756  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 18:36:01.415690  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:36:01.438350  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:36:01.463256  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:36:01.487181  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:36:01.513206  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:36:01.537436  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:36:01.561619  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 18:36:01.579854  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 18:36:01.596492  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 18:36:01.613001  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 18:36:01.629229  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 18:36:01.646433  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 18:36:01.664055  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 18:36:01.681455  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:36:01.687543  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:36:01.698742  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.703087  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.703156  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:36:01.708912  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:36:01.720079  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:36:01.731172  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.735518  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.735578  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:36:01.741589  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:36:01.752893  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:36:01.764993  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.770017  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.770088  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:36:01.776371  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
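
Note: each CA dropped under /usr/share/ca-certificates is linked from /etc/ssl/certs under its subject hash, which is what the "openssl x509 -hash -noout" calls above compute (b5213941 for minikubeCA). The pattern for one certificate (illustrative only):

    # Compute the subject hash and create the matching hash-named symlink.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
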
	I0731 18:36:01.788522  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:36:01.792612  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:36:01.792669  413977 kubeadm.go:934] updating node {m02 192.168.39.202 8443 v1.30.3 crio true true} ...
	I0731 18:36:01.792767  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
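
Note: the kubelet unit snippet above is written to the node as the 10-kubeadm.conf drop-in, overriding ExecStart with the node-specific hostname-override and node-ip flags. Once written, the effective unit can be checked with (illustrative only):

    # Show the drop-in and confirm the node-ip flag systemd will pass to kubelet.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet | grep -- --node-ip
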
	I0731 18:36:01.792797  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:36:01.792841  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:36:01.810740  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:36:01.810808  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
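
Note: the generated kube-vip manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml as a static pod; it announces the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443. Once the kubelet has started it, the VIP can be checked from the guest that currently holds leadership (illustrative only):

    # On the leader the VIP shows up as a secondary address on eth0; crictl confirms the static pod is running.
    ip addr show dev eth0 | grep 192.168.39.254
    sudo crictl ps --name kube-vip
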
	I0731 18:36:01.810870  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:36:01.821431  413977 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 18:36:01.821493  413977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 18:36:01.832107  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 18:36:01.832141  413977 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 18:36:01.832142  413977 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 18:36:01.832149  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:36:01.832340  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:36:01.837253  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 18:36:01.837285  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 18:36:03.944533  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:36:03.960686  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:36:03.960798  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:36:03.965639  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 18:36:03.965681  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0731 18:36:09.311238  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:36:09.311325  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:36:09.316312  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 18:36:09.316363  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
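The lines above show the binary provisioning pattern for this node: kubectl, kubeadm and kubelet are each fetched from dl.k8s.io with a companion .sha256 checksum URL, cached under .minikube/cache, and only then copied into /var/lib/minikube/binaries on the guest. A minimal, hand-rolled sketch of that fetch-and-verify step is below; it is not minikube's own download package (which also handles caching, retries and progress), and the output filename is illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	// Same release URL as in the log above; the .sha256 file holds the expected digest.
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sum))[0] // digest is the first field of the checksum file
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	// Illustrative destination; the real run scp's the cached file into the VM.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and saved")
}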
	I0731 18:36:09.571272  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 18:36:09.581469  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 18:36:09.598804  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:36:09.616310  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 18:36:09.633298  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:36:09.637615  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:36:09.650955  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:09.786501  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:36:09.808147  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:36:09.808597  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:09.808644  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:09.824421  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0731 18:36:09.824979  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:09.825520  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:09.825545  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:09.825893  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:09.826077  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:36:09.826224  413977 start.go:317] joinCluster: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:36:09.826321  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 18:36:09.826338  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:36:09.829547  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:09.830199  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:36:09.830222  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:36:09.830427  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:36:09.830693  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:36:09.830848  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:36:09.831020  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:36:10.001149  413977 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:10.001192  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cb1zae.ffq2me10k33ld2gl --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0731 18:36:31.947814  413977 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token cb1zae.ffq2me10k33ld2gl --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m02 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (21.946589058s)
	I0731 18:36:31.947859  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 18:36:32.512036  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651-m02 minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=false
	I0731 18:36:32.648147  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326651-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 18:36:32.811459  413977 start.go:319] duration metric: took 22.985226172s to joinCluster
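Joining m02 finishes with the two kubectl calls visible above: the node is labeled with minikube.k8s.io/* metadata and the node-role.kubernetes.io/control-plane:NoSchedule taint is removed, since in this HA profile the control-plane nodes also run workloads (ControlPlane:true Worker:true). A rough client-go equivalent of those two steps is sketched below; it is not minikube's code, the kubeconfig path and node name are taken from this run, and only one of the labels is set for brevity.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-326651-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node labeled and untainted")
}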
	I0731 18:36:32.811551  413977 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:32.811905  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:32.812972  413977 out.go:177] * Verifying Kubernetes components...
	I0731 18:36:32.814591  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:36:33.044219  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:36:33.117979  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:36:33.118326  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 18:36:33.118396  413977 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I0731 18:36:33.118595  413977 node_ready.go:35] waiting up to 6m0s for node "ha-326651-m02" to be "Ready" ...
	I0731 18:36:33.118684  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:33.118692  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:33.118700  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:33.118705  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:33.135614  413977 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0731 18:36:33.619795  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:33.619826  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:33.619837  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:33.619844  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:33.641399  413977 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 18:36:34.119252  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:34.119276  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:34.119286  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:34.119293  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:34.123528  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:34.619820  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:34.619853  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:34.619864  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:34.619879  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:34.627714  413977 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 18:36:35.118866  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:35.118893  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:35.118905  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:35.118909  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:35.122701  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:35.123182  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:35.619811  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:35.619833  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:35.619842  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:35.619846  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:35.623151  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:36.119732  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:36.119753  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:36.119762  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:36.119766  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:36.123530  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:36.619092  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:36.619125  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:36.619136  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:36.619140  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:36.622490  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:37.119642  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:37.119665  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:37.119673  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:37.119677  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:37.122991  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:37.123835  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:37.619251  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:37.619283  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:37.619292  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:37.619296  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:37.623460  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:38.119708  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:38.119736  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:38.119745  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:38.119749  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:38.123105  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:38.619250  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:38.619275  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:38.619284  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:38.619288  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:38.622772  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.118886  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:39.118911  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:39.118920  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:39.118924  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:39.122321  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.619186  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:39.619210  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:39.619219  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:39.619222  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:39.622507  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:39.623126  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:40.119023  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:40.119051  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:40.119064  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:40.119069  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:40.122756  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:40.619118  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:40.619146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:40.619155  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:40.619160  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:40.622901  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.118817  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:41.118841  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:41.118851  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:41.118855  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:41.121963  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.619709  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:41.619734  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:41.619742  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:41.619747  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:41.623720  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:41.624433  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:42.119582  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:42.119612  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:42.119622  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:42.119628  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:42.122500  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:42.619202  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:42.619235  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:42.619248  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:42.619253  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:42.622947  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:43.119214  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:43.119237  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:43.119246  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:43.119249  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:43.124276  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:43.619483  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:43.619508  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:43.619517  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:43.619521  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:43.623921  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:44.119432  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:44.119458  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:44.119466  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:44.119470  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:44.123304  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:44.124032  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:44.619529  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:44.619553  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:44.619562  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:44.619567  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:44.623067  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:45.119620  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:45.119646  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:45.119654  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:45.119657  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:45.123207  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:45.618912  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:45.618938  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:45.618947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:45.618951  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:45.622718  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.119263  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:46.119298  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:46.119308  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:46.119313  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:46.123083  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.619066  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:46.619091  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:46.619100  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:46.619104  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:46.622201  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:46.623139  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:47.119393  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:47.119424  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:47.119433  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:47.119439  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:47.123007  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:47.618994  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:47.619017  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:47.619026  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:47.619030  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:47.622368  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.119257  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:48.119279  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:48.119288  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:48.119293  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:48.123310  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.619282  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:48.619309  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:48.619318  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:48.619322  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:48.622987  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:48.623591  413977 node_ready.go:53] node "ha-326651-m02" has status "Ready":"False"
	I0731 18:36:49.118960  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:49.118988  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:49.118998  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:49.119003  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:49.122405  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:49.619280  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:49.619305  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:49.619312  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:49.619317  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:49.622791  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.119104  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.119129  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.119138  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.119142  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.122870  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.619224  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.619247  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.619255  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.619258  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.622725  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.623311  413977 node_ready.go:49] node "ha-326651-m02" has status "Ready":"True"
	I0731 18:36:50.623351  413977 node_ready.go:38] duration metric: took 17.504731047s for node "ha-326651-m02" to be "Ready" ...
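The block above is a plain readiness poll: the node object is re-fetched roughly every 500ms until its Ready condition reports True, bounded by the 6m0s budget. A compact sketch of the same loop with client-go follows, assuming a recent apimachinery that provides wait.PollUntilContextTimeout; the kubeconfig path and node name are placeholders taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6 minutes, matching the cadence and budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-326651-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}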
	I0731 18:36:50.623363  413977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:36:50.623483  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:50.623496  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.623507  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.623517  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.628686  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:50.635769  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.635863  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hsr7k
	I0731 18:36:50.635871  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.635879  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.635886  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.639019  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.639721  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.639741  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.639752  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.639759  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.642992  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.643579  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.643601  413977 pod_ready.go:81] duration metric: took 7.805024ms for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.643611  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.643669  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p2tfn
	I0731 18:36:50.643676  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.643683  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.643688  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.647444  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.648594  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.648607  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.648615  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.648621  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.651381  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.652052  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.652081  413977 pod_ready.go:81] duration metric: took 8.461392ms for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.652094  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.652183  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651
	I0731 18:36:50.652195  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.652203  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.652207  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.655850  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.656990  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:50.657006  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.657014  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.657019  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.659586  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.660121  413977 pod_ready.go:92] pod "etcd-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.660142  413977 pod_ready.go:81] duration metric: took 8.037093ms for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.660158  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.660218  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m02
	I0731 18:36:50.660226  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.660233  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.660237  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.663068  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:50.663711  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:50.663728  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.663736  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.663739  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.666777  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:50.667644  413977 pod_ready.go:92] pod "etcd-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:50.667666  413977 pod_ready.go:81] duration metric: took 7.501047ms for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.667684  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:50.820032  413977 request.go:629] Waited for 152.267535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:36:50.820136  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:36:50.820146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:50.820156  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:50.820170  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:50.823487  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:51.019288  413977 request.go:629] Waited for 195.11581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.019343  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.019349  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.019359  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.019365  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.021971  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:51.022498  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.022513  413977 pod_ready.go:81] duration metric: took 354.821451ms for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.022523  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.219661  413977 request.go:629] Waited for 197.049789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:36:51.219725  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:36:51.219730  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.219737  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.219742  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.235111  413977 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 18:36:51.419933  413977 request.go:629] Waited for 183.368029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:51.419996  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:51.420001  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.420009  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.420013  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.423405  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:51.423972  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.423990  413977 pod_ready.go:81] duration metric: took 401.460167ms for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.424000  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.620072  413977 request.go:629] Waited for 195.990243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:36:51.620141  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:36:51.620146  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.620154  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.620158  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.623549  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:51.819910  413977 request.go:629] Waited for 195.384302ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.819977  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:51.819983  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:51.819994  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:51.819999  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:51.822872  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:51.823601  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:51.823624  413977 pod_ready.go:81] duration metric: took 399.617251ms for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:51.823638  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.019634  413977 request.go:629] Waited for 195.919417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:36:52.019705  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:36:52.019710  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.019719  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.019724  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.023947  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:52.220260  413977 request.go:629] Waited for 195.397684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:52.220320  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:52.220325  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.220332  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.220336  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.223732  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.224369  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:52.224409  413977 pod_ready.go:81] duration metric: took 400.763328ms for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.224422  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.419481  413977 request.go:629] Waited for 194.973874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:36:52.419570  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:36:52.419577  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.419585  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.419589  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.423022  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.620098  413977 request.go:629] Waited for 196.372792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:52.620260  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:52.620275  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.620286  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.620295  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.623563  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:52.624517  413977 pod_ready.go:92] pod "kube-proxy-hg6sj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:52.624546  413977 pod_ready.go:81] duration metric: took 400.111099ms for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.624562  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:52.819687  413977 request.go:629] Waited for 195.030621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:36:52.819748  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:36:52.819754  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:52.819762  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:52.819765  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:52.823011  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.020234  413977 request.go:629] Waited for 196.391359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.020315  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.020322  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.020334  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.020344  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.023335  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:36:53.023942  413977 pod_ready.go:92] pod "kube-proxy-stqb2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.023963  413977 pod_ready.go:81] duration metric: took 399.393046ms for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.023975  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.220035  413977 request.go:629] Waited for 195.975129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:36:53.220116  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:36:53.220131  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.220139  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.220146  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.224063  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.420226  413977 request.go:629] Waited for 195.373634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:53.420283  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:36:53.420289  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.420297  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.420302  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.423579  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.424326  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.424348  413977 pod_ready.go:81] duration metric: took 400.362187ms for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.424357  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.619262  413977 request.go:629] Waited for 194.802916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:36:53.619362  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:36:53.619369  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.619387  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.619398  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.623028  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.820000  413977 request.go:629] Waited for 196.367475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.820090  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:36:53.820096  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.820104  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.820108  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.823825  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:53.824352  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:36:53.824385  413977 pod_ready.go:81] duration metric: took 400.009008ms for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:36:53.824401  413977 pod_ready.go:38] duration metric: took 3.200992959s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:36:53.824433  413977 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:36:53.824502  413977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:36:53.840295  413977 api_server.go:72] duration metric: took 21.028699297s to wait for apiserver process to appear ...
	I0731 18:36:53.840323  413977 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:36:53.840346  413977 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0731 18:36:53.846270  413977 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0731 18:36:53.846362  413977 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I0731 18:36:53.846378  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:53.846390  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:53.846401  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:53.847375  413977 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 18:36:53.847469  413977 api_server.go:141] control plane version: v1.30.3
	I0731 18:36:53.847486  413977 api_server.go:131] duration metric: took 7.156659ms to wait for apiserver health ...
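The health check above is simply an HTTPS GET against the apiserver's /healthz endpoint, which default RBAC exposes to unauthenticated clients, expecting a 200 response with body "ok". A small standalone probe along those lines is sketched below; it trusts the profile's CA file from this run, and the endpoint address is the one logged above.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// CA for this minikube profile, as referenced in the client config earlier in the log.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}}}
	resp, err := client.Get("https://192.168.39.220:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok".
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}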
	I0731 18:36:53.847493  413977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:36:54.019755  413977 request.go:629] Waited for 172.188734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.019895  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.019922  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.019934  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.019941  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.024788  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:36:54.029445  413977 system_pods.go:59] 17 kube-system pods found
	I0731 18:36:54.029486  413977 system_pods.go:61] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:36:54.029492  413977 system_pods.go:61] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:36:54.029496  413977 system_pods.go:61] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:36:54.029499  413977 system_pods.go:61] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:36:54.029502  413977 system_pods.go:61] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:36:54.029505  413977 system_pods.go:61] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:36:54.029508  413977 system_pods.go:61] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:36:54.029511  413977 system_pods.go:61] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:36:54.029515  413977 system_pods.go:61] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:36:54.029519  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:36:54.029524  413977 system_pods.go:61] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:36:54.029527  413977 system_pods.go:61] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:36:54.029530  413977 system_pods.go:61] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:36:54.029533  413977 system_pods.go:61] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:36:54.029536  413977 system_pods.go:61] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:36:54.029539  413977 system_pods.go:61] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:36:54.029542  413977 system_pods.go:61] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:36:54.029549  413977 system_pods.go:74] duration metric: took 182.050143ms to wait for pod list to return data ...
	I0731 18:36:54.029561  413977 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:36:54.219768  413977 request.go:629] Waited for 190.124299ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:36:54.219879  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:36:54.219891  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.219903  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.219912  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.222946  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:54.223161  413977 default_sa.go:45] found service account: "default"
	I0731 18:36:54.223176  413977 default_sa.go:55] duration metric: took 193.609173ms for default service account to be created ...
	I0731 18:36:54.223184  413977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:36:54.419651  413977 request.go:629] Waited for 196.386127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.419720  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:36:54.419725  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.419733  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.419736  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.424775  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:36:54.430823  413977 system_pods.go:86] 17 kube-system pods found
	I0731 18:36:54.430854  413977 system_pods.go:89] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:36:54.430864  413977 system_pods.go:89] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:36:54.430870  413977 system_pods.go:89] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:36:54.430875  413977 system_pods.go:89] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:36:54.430882  413977 system_pods.go:89] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:36:54.430889  413977 system_pods.go:89] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:36:54.430895  413977 system_pods.go:89] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:36:54.430902  413977 system_pods.go:89] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:36:54.430912  413977 system_pods.go:89] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:36:54.430919  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:36:54.430930  413977 system_pods.go:89] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:36:54.430937  413977 system_pods.go:89] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:36:54.430944  413977 system_pods.go:89] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:36:54.430953  413977 system_pods.go:89] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:36:54.430961  413977 system_pods.go:89] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:36:54.430969  413977 system_pods.go:89] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:36:54.430975  413977 system_pods.go:89] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:36:54.430988  413977 system_pods.go:126] duration metric: took 207.796783ms to wait for k8s-apps to be running ...
	I0731 18:36:54.431001  413977 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:36:54.431058  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:36:54.446884  413977 system_svc.go:56] duration metric: took 15.869691ms WaitForService to wait for kubelet
	I0731 18:36:54.446917  413977 kubeadm.go:582] duration metric: took 21.635330045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:36:54.446939  413977 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:36:54.619227  413977 request.go:629] Waited for 172.209982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I0731 18:36:54.619294  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I0731 18:36:54.619300  413977 round_trippers.go:469] Request Headers:
	I0731 18:36:54.619308  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:36:54.619313  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:36:54.622925  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:36:54.623751  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:36:54.623791  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:36:54.623816  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:36:54.623820  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:36:54.623826  413977 node_conditions.go:105] duration metric: took 176.882629ms to run NodePressure ...
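The NodePressure step above lists the cluster nodes and reads the capacity each one reports (two nodes so far, each with 17734596Ki ephemeral storage and 2 CPUs). A minimal client-go sketch of the same query is shown below; it is illustrative only, assumes a reachable kubeconfig at ~/.kube/config for this profile, and is not minikube's actual implementation.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Assumption: a kubeconfig that points at the cluster under test.
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same fields the log reports: ephemeral storage and CPU capacity.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
                n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }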
	I0731 18:36:54.623838  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:36:54.623868  413977 start.go:255] writing updated cluster config ...
	I0731 18:36:54.626219  413977 out.go:177] 
	I0731 18:36:54.628763  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:36:54.628859  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:36:54.630660  413977 out.go:177] * Starting "ha-326651-m03" control-plane node in "ha-326651" cluster
	I0731 18:36:54.632068  413977 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:36:54.632100  413977 cache.go:56] Caching tarball of preloaded images
	I0731 18:36:54.632225  413977 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:36:54.632240  413977 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:36:54.632350  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:36:54.632563  413977 start.go:360] acquireMachinesLock for ha-326651-m03: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:36:54.632610  413977 start.go:364] duration metric: took 26.59µs to acquireMachinesLock for "ha-326651-m03"
	I0731 18:36:54.632626  413977 start.go:93] Provisioning new machine with config: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:36:54.632717  413977 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 18:36:54.634360  413977 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 18:36:54.634443  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:36:54.634479  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:36:54.649865  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0731 18:36:54.650366  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:36:54.650792  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:36:54.650814  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:36:54.651168  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:36:54.651420  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:36:54.651573  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:36:54.651738  413977 start.go:159] libmachine.API.Create for "ha-326651" (driver="kvm2")
	I0731 18:36:54.651774  413977 client.go:168] LocalClient.Create starting
	I0731 18:36:54.651806  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 18:36:54.651838  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:36:54.651856  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:36:54.651908  413977 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 18:36:54.651928  413977 main.go:141] libmachine: Decoding PEM data...
	I0731 18:36:54.651939  413977 main.go:141] libmachine: Parsing certificate...
	I0731 18:36:54.651958  413977 main.go:141] libmachine: Running pre-create checks...
	I0731 18:36:54.651966  413977 main.go:141] libmachine: (ha-326651-m03) Calling .PreCreateCheck
	I0731 18:36:54.652128  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:36:54.652579  413977 main.go:141] libmachine: Creating machine...
	I0731 18:36:54.652596  413977 main.go:141] libmachine: (ha-326651-m03) Calling .Create
	I0731 18:36:54.652732  413977 main.go:141] libmachine: (ha-326651-m03) Creating KVM machine...
	I0731 18:36:54.653878  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found existing default KVM network
	I0731 18:36:54.654014  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found existing private KVM network mk-ha-326651
	I0731 18:36:54.654182  413977 main.go:141] libmachine: (ha-326651-m03) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 ...
	I0731 18:36:54.654219  413977 main.go:141] libmachine: (ha-326651-m03) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:36:54.654321  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:54.654194  414751 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:36:54.654415  413977 main.go:141] libmachine: (ha-326651-m03) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 18:36:54.925445  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:54.925298  414751 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa...
	I0731 18:36:55.032632  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:55.032498  414751 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/ha-326651-m03.rawdisk...
	I0731 18:36:55.032662  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Writing magic tar header
	I0731 18:36:55.032677  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Writing SSH key tar header
	I0731 18:36:55.032691  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:55.032610  414751 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 ...
	I0731 18:36:55.032713  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03
	I0731 18:36:55.032827  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 18:36:55.032857  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03 (perms=drwx------)
	I0731 18:36:55.032868  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:36:55.032900  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 18:36:55.032933  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 18:36:55.032945  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 18:36:55.032963  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 18:36:55.032972  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 18:36:55.033009  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 18:36:55.033032  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 18:36:55.033044  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Checking permissions on dir: /home
	I0731 18:36:55.033059  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Skipping /home - not owner
	I0731 18:36:55.033075  413977 main.go:141] libmachine: (ha-326651-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 18:36:55.033090  413977 main.go:141] libmachine: (ha-326651-m03) Creating domain...
	I0731 18:36:55.033848  413977 main.go:141] libmachine: (ha-326651-m03) define libvirt domain using xml: 
	I0731 18:36:55.033877  413977 main.go:141] libmachine: (ha-326651-m03) <domain type='kvm'>
	I0731 18:36:55.033889  413977 main.go:141] libmachine: (ha-326651-m03)   <name>ha-326651-m03</name>
	I0731 18:36:55.033896  413977 main.go:141] libmachine: (ha-326651-m03)   <memory unit='MiB'>2200</memory>
	I0731 18:36:55.033910  413977 main.go:141] libmachine: (ha-326651-m03)   <vcpu>2</vcpu>
	I0731 18:36:55.033920  413977 main.go:141] libmachine: (ha-326651-m03)   <features>
	I0731 18:36:55.033930  413977 main.go:141] libmachine: (ha-326651-m03)     <acpi/>
	I0731 18:36:55.033940  413977 main.go:141] libmachine: (ha-326651-m03)     <apic/>
	I0731 18:36:55.033951  413977 main.go:141] libmachine: (ha-326651-m03)     <pae/>
	I0731 18:36:55.033957  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.033967  413977 main.go:141] libmachine: (ha-326651-m03)   </features>
	I0731 18:36:55.033978  413977 main.go:141] libmachine: (ha-326651-m03)   <cpu mode='host-passthrough'>
	I0731 18:36:55.033988  413977 main.go:141] libmachine: (ha-326651-m03)   
	I0731 18:36:55.033998  413977 main.go:141] libmachine: (ha-326651-m03)   </cpu>
	I0731 18:36:55.034009  413977 main.go:141] libmachine: (ha-326651-m03)   <os>
	I0731 18:36:55.034019  413977 main.go:141] libmachine: (ha-326651-m03)     <type>hvm</type>
	I0731 18:36:55.034030  413977 main.go:141] libmachine: (ha-326651-m03)     <boot dev='cdrom'/>
	I0731 18:36:55.034041  413977 main.go:141] libmachine: (ha-326651-m03)     <boot dev='hd'/>
	I0731 18:36:55.034056  413977 main.go:141] libmachine: (ha-326651-m03)     <bootmenu enable='no'/>
	I0731 18:36:55.034069  413977 main.go:141] libmachine: (ha-326651-m03)   </os>
	I0731 18:36:55.034080  413977 main.go:141] libmachine: (ha-326651-m03)   <devices>
	I0731 18:36:55.034091  413977 main.go:141] libmachine: (ha-326651-m03)     <disk type='file' device='cdrom'>
	I0731 18:36:55.034104  413977 main.go:141] libmachine: (ha-326651-m03)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/boot2docker.iso'/>
	I0731 18:36:55.034116  413977 main.go:141] libmachine: (ha-326651-m03)       <target dev='hdc' bus='scsi'/>
	I0731 18:36:55.034127  413977 main.go:141] libmachine: (ha-326651-m03)       <readonly/>
	I0731 18:36:55.034136  413977 main.go:141] libmachine: (ha-326651-m03)     </disk>
	I0731 18:36:55.034169  413977 main.go:141] libmachine: (ha-326651-m03)     <disk type='file' device='disk'>
	I0731 18:36:55.034192  413977 main.go:141] libmachine: (ha-326651-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 18:36:55.034207  413977 main.go:141] libmachine: (ha-326651-m03)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/ha-326651-m03.rawdisk'/>
	I0731 18:36:55.034219  413977 main.go:141] libmachine: (ha-326651-m03)       <target dev='hda' bus='virtio'/>
	I0731 18:36:55.034233  413977 main.go:141] libmachine: (ha-326651-m03)     </disk>
	I0731 18:36:55.034244  413977 main.go:141] libmachine: (ha-326651-m03)     <interface type='network'>
	I0731 18:36:55.034255  413977 main.go:141] libmachine: (ha-326651-m03)       <source network='mk-ha-326651'/>
	I0731 18:36:55.034271  413977 main.go:141] libmachine: (ha-326651-m03)       <model type='virtio'/>
	I0731 18:36:55.034282  413977 main.go:141] libmachine: (ha-326651-m03)     </interface>
	I0731 18:36:55.034292  413977 main.go:141] libmachine: (ha-326651-m03)     <interface type='network'>
	I0731 18:36:55.034309  413977 main.go:141] libmachine: (ha-326651-m03)       <source network='default'/>
	I0731 18:36:55.034320  413977 main.go:141] libmachine: (ha-326651-m03)       <model type='virtio'/>
	I0731 18:36:55.034330  413977 main.go:141] libmachine: (ha-326651-m03)     </interface>
	I0731 18:36:55.034340  413977 main.go:141] libmachine: (ha-326651-m03)     <serial type='pty'>
	I0731 18:36:55.034367  413977 main.go:141] libmachine: (ha-326651-m03)       <target port='0'/>
	I0731 18:36:55.034390  413977 main.go:141] libmachine: (ha-326651-m03)     </serial>
	I0731 18:36:55.034403  413977 main.go:141] libmachine: (ha-326651-m03)     <console type='pty'>
	I0731 18:36:55.034419  413977 main.go:141] libmachine: (ha-326651-m03)       <target type='serial' port='0'/>
	I0731 18:36:55.034431  413977 main.go:141] libmachine: (ha-326651-m03)     </console>
	I0731 18:36:55.034441  413977 main.go:141] libmachine: (ha-326651-m03)     <rng model='virtio'>
	I0731 18:36:55.034452  413977 main.go:141] libmachine: (ha-326651-m03)       <backend model='random'>/dev/random</backend>
	I0731 18:36:55.034459  413977 main.go:141] libmachine: (ha-326651-m03)     </rng>
	I0731 18:36:55.034466  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.034475  413977 main.go:141] libmachine: (ha-326651-m03)     
	I0731 18:36:55.034485  413977 main.go:141] libmachine: (ha-326651-m03)   </devices>
	I0731 18:36:55.034498  413977 main.go:141] libmachine: (ha-326651-m03) </domain>
	I0731 18:36:55.034512  413977 main.go:141] libmachine: (ha-326651-m03) 
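The block above is the libvirt domain XML the kvm2 driver defines for the new node: boot from the boot2docker ISO (cdrom) with the raw disk attached, two virtio NICs on the mk-ha-326651 and default networks, a serial console, and a virtio RNG. Purely as an illustration (not minikube's code), the same definition could be applied by hand with virsh, assuming the XML were saved to a local file named ha-326651-m03.xml:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Hypothetical manual equivalent of the define/create step the log shows.
        // `virsh define` registers the domain from XML; `virsh start` boots it.
        for _, args := range [][]string{
            {"virsh", "--connect", "qemu:///system", "define", "ha-326651-m03.xml"},
            {"virsh", "--connect", "qemu:///system", "start", "ha-326651-m03"},
        } {
            out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
            fmt.Printf("%v: %s", args, out)
            if err != nil {
                panic(err)
            }
        }
    }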
	I0731 18:36:55.041422  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:49:47:41 in network default
	I0731 18:36:55.041954  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring networks are active...
	I0731 18:36:55.041977  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:55.042594  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring network default is active
	I0731 18:36:55.042817  413977 main.go:141] libmachine: (ha-326651-m03) Ensuring network mk-ha-326651 is active
	I0731 18:36:55.043176  413977 main.go:141] libmachine: (ha-326651-m03) Getting domain xml...
	I0731 18:36:55.043920  413977 main.go:141] libmachine: (ha-326651-m03) Creating domain...
	I0731 18:36:56.284446  413977 main.go:141] libmachine: (ha-326651-m03) Waiting to get IP...
	I0731 18:36:56.285331  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.285792  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.285843  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.285788  414751 retry.go:31] will retry after 304.751946ms: waiting for machine to come up
	I0731 18:36:56.592337  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.592775  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.592803  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.592717  414751 retry.go:31] will retry after 340.274018ms: waiting for machine to come up
	I0731 18:36:56.934275  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:56.934639  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:56.934664  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:56.934590  414751 retry.go:31] will retry after 480.912288ms: waiting for machine to come up
	I0731 18:36:57.417185  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:57.417546  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:57.417569  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:57.417515  414751 retry.go:31] will retry after 559.822127ms: waiting for machine to come up
	I0731 18:36:57.978965  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:57.979412  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:57.979445  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:57.979342  414751 retry.go:31] will retry after 661.136496ms: waiting for machine to come up
	I0731 18:36:58.641741  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:58.642127  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:58.642145  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:58.642067  414751 retry.go:31] will retry after 868.945905ms: waiting for machine to come up
	I0731 18:36:59.512206  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:36:59.512689  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:36:59.512728  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:36:59.512626  414751 retry.go:31] will retry after 989.429958ms: waiting for machine to come up
	I0731 18:37:00.504321  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:00.504690  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:00.504722  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:00.504638  414751 retry.go:31] will retry after 1.406836695s: waiting for machine to come up
	I0731 18:37:01.912991  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:01.913456  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:01.913484  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:01.913423  414751 retry.go:31] will retry after 1.15357756s: waiting for machine to come up
	I0731 18:37:03.068203  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:03.068692  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:03.068733  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:03.068647  414751 retry.go:31] will retry after 1.659498365s: waiting for machine to come up
	I0731 18:37:04.729694  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:04.730087  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:04.730118  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:04.730024  414751 retry.go:31] will retry after 1.779116686s: waiting for machine to come up
	I0731 18:37:06.511383  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:06.511853  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:06.511884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:06.511794  414751 retry.go:31] will retry after 3.278316837s: waiting for machine to come up
	I0731 18:37:09.792484  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:09.792916  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:09.792940  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:09.792886  414751 retry.go:31] will retry after 3.596881471s: waiting for machine to come up
	I0731 18:37:13.393517  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:13.393946  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find current IP address of domain ha-326651-m03 in network mk-ha-326651
	I0731 18:37:13.393970  413977 main.go:141] libmachine: (ha-326651-m03) DBG | I0731 18:37:13.393891  414751 retry.go:31] will retry after 3.454646204s: waiting for machine to come up
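The attempts above (retry.go:31) poll the domain for a DHCP-assigned IP address, sleeping for a roughly growing, jittered interval between tries until the machine comes up; the address is found on the next attempt below. A generic sketch of that wait pattern, not the actual minikube retry package, could look like:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the domain's DHCP lease; this stub
    // "finds" the address after a few attempts, purely for illustration.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.50", nil
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("found IP:", ip)
                return
            }
            // Grow the delay and add jitter, roughly like the intervals in the
            // log (304ms, 340ms, 480ms, ... up to a few seconds).
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }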
	I0731 18:37:16.850516  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.851033  413977 main.go:141] libmachine: (ha-326651-m03) Found IP for machine: 192.168.39.50
	I0731 18:37:16.851057  413977 main.go:141] libmachine: (ha-326651-m03) Reserving static IP address...
	I0731 18:37:16.851070  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has current primary IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.852215  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find host DHCP lease matching {name: "ha-326651-m03", mac: "52:54:00:4a:ff:37", ip: "192.168.39.50"} in network mk-ha-326651
	I0731 18:37:16.927588  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Getting to WaitForSSH function...
	I0731 18:37:16.927621  413977 main.go:141] libmachine: (ha-326651-m03) Reserved static IP address: 192.168.39.50
	I0731 18:37:16.927635  413977 main.go:141] libmachine: (ha-326651-m03) Waiting for SSH to be available...
	I0731 18:37:16.930121  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:16.930521  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651
	I0731 18:37:16.930551  413977 main.go:141] libmachine: (ha-326651-m03) DBG | unable to find defined IP address of network mk-ha-326651 interface with MAC address 52:54:00:4a:ff:37
	I0731 18:37:16.930736  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH client type: external
	I0731 18:37:16.930762  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa (-rw-------)
	I0731 18:37:16.930790  413977 main.go:141] libmachine: (ha-326651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:37:16.930812  413977 main.go:141] libmachine: (ha-326651-m03) DBG | About to run SSH command:
	I0731 18:37:16.930823  413977 main.go:141] libmachine: (ha-326651-m03) DBG | exit 0
	I0731 18:37:16.934884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | SSH cmd err, output: exit status 255: 
	I0731 18:37:16.934913  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0731 18:37:16.934924  413977 main.go:141] libmachine: (ha-326651-m03) DBG | command : exit 0
	I0731 18:37:16.934932  413977 main.go:141] libmachine: (ha-326651-m03) DBG | err     : exit status 255
	I0731 18:37:16.934957  413977 main.go:141] libmachine: (ha-326651-m03) DBG | output  : 
	I0731 18:37:19.935112  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Getting to WaitForSSH function...
	I0731 18:37:19.937438  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:19.937884  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:19.937918  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:19.938123  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH client type: external
	I0731 18:37:19.938150  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa (-rw-------)
	I0731 18:37:19.938184  413977 main.go:141] libmachine: (ha-326651-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.50 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 18:37:19.938203  413977 main.go:141] libmachine: (ha-326651-m03) DBG | About to run SSH command:
	I0731 18:37:19.938220  413977 main.go:141] libmachine: (ha-326651-m03) DBG | exit 0
	I0731 18:37:20.060867  413977 main.go:141] libmachine: (ha-326651-m03) DBG | SSH cmd err, output: <nil>: 
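The probe above runs `exit 0` on the guest through the external ssh binary with the options the log prints (throwaway known-hosts file, 10s connect timeout, key-only auth as user docker); the first attempt fails with status 255 before the DHCP lease exists, the second succeeds once the IP is known. A hedged Go sketch that drives the system ssh client with the same flags, not minikube's own SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Options copied from the log's external SSH client invocation.
        key := "/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa"
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", key,
            "-p", "22",
            "docker@192.168.39.50",
            "exit 0")
        out, err := cmd.CombinedOutput()
        // A nil error means `exit 0` ran on the guest, i.e. SSH is available.
        fmt.Printf("err=%v output=%q\n", err, out)
    }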
	I0731 18:37:20.061148  413977 main.go:141] libmachine: (ha-326651-m03) KVM machine creation complete!
	I0731 18:37:20.061490  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:37:20.062097  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:20.062281  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:20.062461  413977 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 18:37:20.062480  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:37:20.063844  413977 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 18:37:20.063861  413977 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 18:37:20.063866  413977 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 18:37:20.063873  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.066216  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.066575  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.066593  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.066831  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.067010  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.067189  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.067345  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.067523  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.067813  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.067828  413977 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 18:37:20.172103  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:37:20.172143  413977 main.go:141] libmachine: Detecting the provisioner...
	I0731 18:37:20.172159  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.175645  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.176045  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.176076  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.176257  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.176527  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.176744  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.176895  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.177073  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.177292  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.177309  413977 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 18:37:20.281327  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 18:37:20.281404  413977 main.go:141] libmachine: found compatible host: buildroot
	I0731 18:37:20.281415  413977 main.go:141] libmachine: Provisioning with buildroot...
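Provisioner detection works by running `cat /etc/os-release` on the guest and matching the reported distribution; here ID=buildroot marks the host as compatible, so buildroot provisioning is used. A small illustrative parser for that file (not libmachine's detector):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Assumes the file was fetched from the guest, as the log does over SSH.
        f, err := os.Open("/etc/os-release")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        fields := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            k, v, ok := strings.Cut(sc.Text(), "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        // The log's "found compatible host: buildroot" corresponds to ID=buildroot.
        fmt.Println("ID:", fields["ID"], "VERSION_ID:", fields["VERSION_ID"])
    }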
	I0731 18:37:20.281427  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.281717  413977 buildroot.go:166] provisioning hostname "ha-326651-m03"
	I0731 18:37:20.281747  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.281963  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.284626  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.285058  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.285091  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.285175  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.285384  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.285581  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.285736  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.285927  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.286222  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.286244  413977 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651-m03 && echo "ha-326651-m03" | sudo tee /etc/hostname
	I0731 18:37:20.403177  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651-m03
	
	I0731 18:37:20.403211  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.406056  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.406423  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.406453  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.406612  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.406798  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.406998  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.407102  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.407270  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.407437  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.407453  413977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:37:20.519447  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:37:20.519482  413977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:37:20.519499  413977 buildroot.go:174] setting up certificates
	I0731 18:37:20.519508  413977 provision.go:84] configureAuth start
	I0731 18:37:20.519517  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetMachineName
	I0731 18:37:20.519800  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:20.522557  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.522949  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.522976  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.523172  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.525648  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.525963  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.525999  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.526124  413977 provision.go:143] copyHostCerts
	I0731 18:37:20.526157  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:37:20.526191  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:37:20.526200  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:37:20.526261  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:37:20.526341  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:37:20.526359  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:37:20.526365  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:37:20.526388  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:37:20.526435  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:37:20.526451  413977 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:37:20.526457  413977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:37:20.526476  413977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:37:20.526524  413977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651-m03 san=[127.0.0.1 192.168.39.50 ha-326651-m03 localhost minikube]
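The server certificate generated here is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.50, ha-326651-m03, localhost, minikube) for org jenkins.ha-326651-m03, valid for the CertExpiration of 26280h from the cluster config. As a rough, self-contained sketch (self-signed rather than CA-signed, and not minikube's code), a certificate with the same SANs could be produced with crypto/x509:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-326651-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go line above.
            DNSNames:    []string{"ha-326651-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.50")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }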
	I0731 18:37:20.769988  413977 provision.go:177] copyRemoteCerts
	I0731 18:37:20.770051  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:37:20.770076  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.772989  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.773274  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.773304  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.773456  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.773676  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.773824  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.773976  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:20.856809  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:37:20.856890  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:37:20.882984  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:37:20.883068  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 18:37:20.909134  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:37:20.909222  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 18:37:20.933028  413977 provision.go:87] duration metric: took 413.504588ms to configureAuth
	I0731 18:37:20.933064  413977 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:37:20.933298  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:20.933377  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:20.936045  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.936362  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:20.936424  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:20.936608  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:20.936855  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.937035  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:20.937221  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:20.937398  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:20.937615  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:20.937634  413977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:37:21.200546  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:37:21.200577  413977 main.go:141] libmachine: Checking connection to Docker...
	I0731 18:37:21.200587  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetURL
	I0731 18:37:21.201925  413977 main.go:141] libmachine: (ha-326651-m03) DBG | Using libvirt version 6000000
	I0731 18:37:21.204087  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.204537  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.204558  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.204717  413977 main.go:141] libmachine: Docker is up and running!
	I0731 18:37:21.204732  413977 main.go:141] libmachine: Reticulating splines...
	I0731 18:37:21.204740  413977 client.go:171] duration metric: took 26.552956298s to LocalClient.Create
	I0731 18:37:21.204769  413977 start.go:167] duration metric: took 26.553031792s to libmachine.API.Create "ha-326651"
	I0731 18:37:21.204782  413977 start.go:293] postStartSetup for "ha-326651-m03" (driver="kvm2")
	I0731 18:37:21.204798  413977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:37:21.204833  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.205107  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:37:21.205135  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.207425  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.207784  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.207813  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.207930  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.208124  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.208275  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.208431  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.291787  413977 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:37:21.296302  413977 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:37:21.296338  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:37:21.296453  413977 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:37:21.296569  413977 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:37:21.296584  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:37:21.296787  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:37:21.308040  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:37:21.333597  413977 start.go:296] duration metric: took 128.798747ms for postStartSetup
	I0731 18:37:21.333658  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetConfigRaw
	I0731 18:37:21.334235  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:21.337257  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.337609  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.337639  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.337918  413977 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:37:21.338174  413977 start.go:128] duration metric: took 26.705444424s to createHost
	I0731 18:37:21.338200  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.340433  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.340727  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.340754  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.340982  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.341195  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.341366  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.341505  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.341643  413977 main.go:141] libmachine: Using SSH client type: native
	I0731 18:37:21.341799  413977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.50 22 <nil> <nil>}
	I0731 18:37:21.341808  413977 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:37:21.445509  413977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722451041.423659203
	
	I0731 18:37:21.445535  413977 fix.go:216] guest clock: 1722451041.423659203
	I0731 18:37:21.445546  413977 fix.go:229] Guest: 2024-07-31 18:37:21.423659203 +0000 UTC Remote: 2024-07-31 18:37:21.338186845 +0000 UTC m=+160.651491096 (delta=85.472358ms)
	I0731 18:37:21.445572  413977 fix.go:200] guest clock delta is within tolerance: 85.472358ms
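
The clock lines above are minikube's guest-clock skew check: a timestamp command is run on the new machine over SSH and compared against the host clock, and the ~85 ms delta is accepted as within tolerance. The logged command is printed with Go format-verb placeholders (%!s(MISSING)); judging from the seconds.nanoseconds output it is assumed to correspond to date +%s.%N. A minimal bash sketch of the same comparison, reusing the key path and guest IP from this run as example values:

    # Sketch: compare host and guest clocks the way the check above does.
    # The exact remote command is assumed to be date +%s.%N (the log only shows
    # format placeholders); key path and IP are copied from this run.
    KEY=/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i "$KEY" docker@192.168.39.50 'date +%s.%N')
    # A positive result means the guest clock runs ahead of the host.
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN{printf "guest-host delta: %.6f s\n", g-h}'
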
	I0731 18:37:21.445577  413977 start.go:83] releasing machines lock for "ha-326651-m03", held for 26.812959209s
	I0731 18:37:21.445595  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.445940  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:21.449123  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.449558  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.449589  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.451678  413977 out.go:177] * Found network options:
	I0731 18:37:21.452816  413977 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.202
	W0731 18:37:21.453988  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 18:37:21.454008  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:37:21.454024  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454513  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454704  413977 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:37:21.454791  413977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:37:21.454836  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	W0731 18:37:21.454904  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 18:37:21.454919  413977 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 18:37:21.454983  413977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:37:21.454998  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:37:21.457457  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457776  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.457801  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457827  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.457943  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.458120  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.458239  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:21.458263  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:21.458272  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.458406  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.458441  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:37:21.458563  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:37:21.458678  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:37:21.458829  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:37:21.692148  413977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:37:21.698327  413977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:37:21.698395  413977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:37:21.718593  413977 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 18:37:21.718621  413977 start.go:495] detecting cgroup driver to use...
	I0731 18:37:21.718696  413977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:37:21.737923  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:37:21.753184  413977 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:37:21.753250  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:37:21.768064  413977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:37:21.784310  413977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:37:21.908161  413977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:37:22.083038  413977 docker.go:233] disabling docker service ...
	I0731 18:37:22.083124  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:37:22.098655  413977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:37:22.111970  413977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:37:22.232896  413977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:37:22.360924  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:37:22.376636  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:37:22.396880  413977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:37:22.396952  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.408234  413977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:37:22.408307  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.419945  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.431147  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.443100  413977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:37:22.454964  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.466805  413977 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:37:22.485897  413977 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
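
Between the crictl.yaml write further up and the sed edits just above, the container runtime on the new node ends up configured as follows: crictl talks to CRI-O at unix:///var/run/crio/crio.sock, the pause image is pinned to registry.k8s.io/pause:3.9, cgroup_manager is set to cgroupfs with conmon_cgroup = pod, and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. A quick way to confirm the result on the guest (a sketch; the grep pattern is illustrative):

    # Expected drop-in values after the edits above (other keys from the base image omitted):
    #   pause_image     = "registry.k8s.io/pause:3.9"
    #   cgroup_manager  = "cgroupfs"
    #   conmon_cgroup   = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
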
	I0731 18:37:22.497176  413977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:37:22.507090  413977 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 18:37:22.507165  413977 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 18:37:22.521445  413977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
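
The status-255 failure above is the expected path on a freshly booted guest: the /proc/sys/net/bridge tree only exists once the br_netfilter module is loaded, so minikube falls back to modprobe and then turns on IPv4 forwarding. The same sequence, runnable by hand on the node:

    # Load br_netfilter if the bridge sysctls are missing, then enable forwarding.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now succeed instead of failing with 255
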
	I0731 18:37:22.534157  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:22.676758  413977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:37:22.821966  413977 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:37:22.822039  413977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:37:22.828177  413977 start.go:563] Will wait 60s for crictl version
	I0731 18:37:22.828256  413977 ssh_runner.go:195] Run: which crictl
	I0731 18:37:22.832241  413977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:37:22.873183  413977 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:37:22.873288  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:37:22.903426  413977 ssh_runner.go:195] Run: crio --version
	I0731 18:37:22.933611  413977 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:37:22.935044  413977 out.go:177]   - env NO_PROXY=192.168.39.220
	I0731 18:37:22.936181  413977 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.202
	I0731 18:37:22.937307  413977 main.go:141] libmachine: (ha-326651-m03) Calling .GetIP
	I0731 18:37:22.940145  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:22.940560  413977 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:37:22.940589  413977 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:37:22.940759  413977 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:37:22.945274  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
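
The two commands above keep /etc/hosts idempotent: any stale host.minikube.internal line is filtered out, a fresh mapping to the libvirt gateway (192.168.39.1) is appended, and the temp file is copied back into place. A quick check on the guest (a sketch):

    # host.minikube.internal should now resolve to the gateway seen above.
    grep 'host.minikube.internal$' /etc/hosts    # expected: 192.168.39.1  host.minikube.internal
    getent hosts host.minikube.internal
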
	I0731 18:37:22.958156  413977 mustload.go:65] Loading cluster: ha-326651
	I0731 18:37:22.958434  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:22.958818  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:22.958887  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:22.974999  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0731 18:37:22.975528  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:22.976030  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:22.976067  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:22.976417  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:22.976611  413977 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:37:22.978267  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:37:22.978610  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:22.978650  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:22.993692  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0731 18:37:22.994091  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:22.994512  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:22.994533  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:22.994868  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:22.995063  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:37:22.995233  413977 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.50
	I0731 18:37:22.995246  413977 certs.go:194] generating shared ca certs ...
	I0731 18:37:22.995265  413977 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:22.995412  413977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:37:22.995450  413977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:37:22.995460  413977 certs.go:256] generating profile certs ...
	I0731 18:37:22.995528  413977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:37:22.995552  413977 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421
	I0731 18:37:22.995567  413977 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.50 192.168.39.254]
	I0731 18:37:23.355528  413977 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 ...
	I0731 18:37:23.355565  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421: {Name:mkcf338dc55a624e933a8ac41432a2ed33c665ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:23.355767  413977 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421 ...
	I0731 18:37:23.355786  413977 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421: {Name:mk4c41ccc495694c66da6b0b64e94b8844359729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:37:23.355892  413977 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.cf9cc421 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:37:23.356052  413977 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.cf9cc421 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
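
The profile certificate generated above is the API-server serving certificate for the new control-plane node; note the SAN list on the Generating line, which covers the in-cluster service IP (10.96.0.1), localhost, all three node IPs and the kube-vip VIP 192.168.39.254, so clients get a valid certificate whichever endpoint they hit. minikube creates this cert in Go, but an equivalent openssl sketch makes the SAN handling visible (ca.crt/ca.key stand in for the cluster CA and all file names here are illustrative):

    # Illustrative only: issue a serving cert carrying the SANs listed in the log above.
    openssl req -new -newkey rsa:2048 -nodes -subj "/CN=kube-apiserver" \
        -keyout apiserver.key -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
        -out apiserver.crt \
        -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.220,IP:192.168.39.202,IP:192.168.39.50,IP:192.168.39.254')
    openssl x509 -in apiserver.crt -noout -ext subjectAltName   # verify the SANs
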
	I0731 18:37:23.356222  413977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:37:23.356244  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:37:23.356263  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:37:23.356280  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:37:23.356299  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:37:23.356320  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:37:23.356338  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:37:23.356359  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:37:23.356394  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:37:23.356463  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:37:23.356505  413977 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:37:23.356519  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:37:23.356555  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:37:23.356592  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:37:23.356620  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:37:23.356667  413977 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:37:23.356696  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.356710  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.356723  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:23.356763  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:37:23.359908  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:23.360318  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:37:23.360345  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:23.360527  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:37:23.360758  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:37:23.360946  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:37:23.361102  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:37:23.436854  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 18:37:23.442449  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 18:37:23.455049  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 18:37:23.461033  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 18:37:23.473443  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 18:37:23.478346  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 18:37:23.489292  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 18:37:23.493509  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 18:37:23.503713  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 18:37:23.507831  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 18:37:23.519242  413977 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 18:37:23.524301  413977 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0731 18:37:23.534575  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:37:23.561693  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:37:23.586179  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:37:23.610694  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:37:23.636016  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 18:37:23.660606  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:37:23.685418  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:37:23.709921  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:37:23.734138  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:37:23.758612  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:37:23.783065  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:37:23.807696  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 18:37:23.824745  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 18:37:23.842808  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 18:37:23.860365  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 18:37:23.876879  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 18:37:23.893606  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0731 18:37:23.909694  413977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 18:37:23.925716  413977 ssh_runner.go:195] Run: openssl version
	I0731 18:37:23.931613  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:37:23.942303  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.947004  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.947056  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:37:23.952885  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 18:37:23.963671  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:37:23.974424  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.979179  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.979249  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:37:23.985074  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:37:23.995420  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:37:24.005627  413977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.010052  413977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.010148  413977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:37:24.015982  413977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
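
The three openssl/ln pairs above install the 402313.pem and 4023132.pem test certificates and the minikube CA under /etc/ssl/certs by their OpenSSL subject hashes (51391683.0, 3ec20f2e.0, b5213941.0), which is how the system trust store looks certificates up. The generic recipe for any PEM certificate:

    # Link a CA certificate under its subject-hash name, as the steps above do.
    CERT=/usr/share/ca-certificates/minikubeCA.pem      # example path from this run
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    ls -l "/etc/ssl/certs/${HASH}.0"
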
	I0731 18:37:24.026995  413977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:37:24.031492  413977 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 18:37:24.031554  413977 kubeadm.go:934] updating node {m03 192.168.39.50 8443 v1.30.3 crio true true} ...
	I0731 18:37:24.031661  413977 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:37:24.031693  413977 kube-vip.go:115] generating kube-vip config ...
	I0731 18:37:24.031735  413977 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:37:24.047475  413977 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:37:24.047569  413977 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 18:37:24.047638  413977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:37:24.058198  413977 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 18:37:24.058264  413977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 18:37:24.069883  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 18:37:24.069892  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 18:37:24.069923  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:37:24.069938  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:37:24.069942  413977 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 18:37:24.069961  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:37:24.070020  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 18:37:24.070030  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 18:37:24.080018  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 18:37:24.080065  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 18:37:24.080339  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 18:37:24.080362  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 18:37:24.094932  413977 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:37:24.095016  413977 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 18:37:24.208092  413977 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 18:37:24.208143  413977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
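
The three transfers above push the cached v1.30.3 kubeadm, kubectl and kubelet binaries onto the new node because /var/lib/minikube/binaries/v1.30.3 did not exist yet; on a cold cache minikube would instead fetch them from the dl.k8s.io URLs logged earlier together with their .sha256 companions. A hand-run equivalent of that download-and-verify step (URLs from the log, destination path as used above):

    # Fetch and verify one of the v1.30.3 binaries, then install it where the kubelet unit expects it.
    BIN=kubelet                                   # likewise for kubeadm and kubectl
    URL=https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/${BIN}
    curl -fsSLO "$URL"
    curl -fsSLO "${URL}.sha256"
    echo "$(cat ${BIN}.sha256)  ${BIN}" | sha256sum --check
    sudo install -m 0755 "${BIN}" "/var/lib/minikube/binaries/v1.30.3/${BIN}"
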
	I0731 18:37:24.979760  413977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 18:37:24.990405  413977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 18:37:25.007798  413977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:37:25.024522  413977 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
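
The 1441-byte kube-vip.yaml copied above is the manifest rendered earlier in this log; placing it in /etc/kubernetes/manifests makes kubelet run kube-vip as a static pod on this control-plane node, so the VIP 192.168.39.254 can move between nodes. Once kubelet is started a few lines below, it can be checked with something like the following (a sketch; the mirror-pod name is derived by kubelet as <pod>-<node>):

    # Confirm the kube-vip static pod came up on the new node.
    sudo crictl ps --name kube-vip
    # or, with cluster access from anywhere:
    kubectl -n kube-system get pod kube-vip-ha-326651-m03
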
	I0731 18:37:25.041751  413977 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:37:25.046230  413977 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 18:37:25.059443  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:25.186943  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:37:25.207644  413977 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:37:25.208083  413977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:37:25.208125  413977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:37:25.225643  413977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0731 18:37:25.226224  413977 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:37:25.226824  413977 main.go:141] libmachine: Using API Version  1
	I0731 18:37:25.226856  413977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:37:25.227192  413977 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:37:25.227409  413977 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:37:25.227582  413977 start.go:317] joinCluster: &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:37:25.227764  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 18:37:25.227790  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:37:25.230925  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:25.231410  413977 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:37:25.231452  413977 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:37:25.231562  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:37:25.231748  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:37:25.231901  413977 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:37:25.232063  413977 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:37:25.531546  413977 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:37:25.531610  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fby254.sm0cc13ve70otyt8 --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m03 --control-plane --apiserver-advertise-address=192.168.39.50 --apiserver-bind-port=8443"
	I0731 18:37:49.867127  413977 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fby254.sm0cc13ve70otyt8 --discovery-token-ca-cert-hash sha256:89b0ff177877e0036362a39b8299c650e2eeac29ce665a2da69c1feead68c7bd --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-326651-m03 --control-plane --apiserver-advertise-address=192.168.39.50 --apiserver-bind-port=8443": (24.335481808s)
	I0731 18:37:49.867179  413977 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 18:37:50.378941  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-326651-m03 minikube.k8s.io/updated_at=2024_07_31T18_37_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c minikube.k8s.io/name=ha-326651 minikube.k8s.io/primary=false
	I0731 18:37:50.527273  413977 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-326651-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 18:37:50.656890  413977 start.go:319] duration metric: took 25.429303959s to joinCluster
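
After the ~24s kubeadm join, the two kubectl runs above stamp the node with minikube's bookkeeping labels and drop the node-role.kubernetes.io/control-plane:NoSchedule taint, since this cluster runs every control-plane node as a worker as well (ControlPlane:true Worker:true in the node spec). A quick verification from any kubeconfig with cluster access (a sketch):

    # Labels applied above should be present, and the NoSchedule taint gone.
    kubectl get node ha-326651-m03 --show-labels
    kubectl get node ha-326651-m03 -o jsonpath='{.spec.taints}{"\n"}'   # expect no control-plane NoSchedule entry
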
	I0731 18:37:50.657001  413977 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 18:37:50.657367  413977 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:37:50.658501  413977 out.go:177] * Verifying Kubernetes components...
	I0731 18:37:50.660034  413977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:37:50.963606  413977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:37:51.019362  413977 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:37:51.019656  413977 kapi.go:59] client config for ha-326651: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 18:37:51.019725  413977 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I0731 18:37:51.019987  413977 node_ready.go:35] waiting up to 6m0s for node "ha-326651-m03" to be "Ready" ...
	I0731 18:37:51.020079  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:51.020090  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:51.020101  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:51.020111  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:51.023093  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:37:51.520174  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:51.520197  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:51.520209  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:51.520216  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:51.523954  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:52.020852  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:52.020928  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:52.020947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:52.020959  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:52.024734  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:52.520559  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:52.520589  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:52.520600  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:52.520605  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:52.523898  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:53.020720  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:53.020743  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:53.020751  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:53.020754  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:53.024464  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:53.025297  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
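
The repeating GET requests around this point are minikube's node_ready wait: it re-reads /api/v1/nodes/ha-326651-m03 roughly every half second (see the timestamps) until the Ready condition turns True, giving up after 6 minutes. The same wait expressed with kubectl, as an equivalent sketch rather than what minikube itself runs:

    # Block until the freshly joined node reports Ready, mirroring the poll in this log.
    kubectl wait --for=condition=Ready node/ha-326651-m03 --timeout=6m
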
	I0731 18:37:53.520563  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:53.520585  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:53.520593  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:53.520596  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:53.524043  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:54.021117  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:54.021143  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:54.021154  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:54.021161  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:54.024853  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:54.521245  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:54.521275  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:54.521286  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:54.521290  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:54.525584  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:55.020575  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:55.020599  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:55.020608  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:55.020619  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:55.024041  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:55.521241  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:55.521267  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:55.521278  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:55.521285  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:55.524183  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:37:55.525023  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:37:56.020919  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:56.020978  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:56.020990  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:56.020996  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:56.024793  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:56.520999  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:56.521030  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:56.521039  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:56.521045  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:56.524880  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:57.020558  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:57.020583  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:57.020592  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:57.020595  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:57.024064  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:57.521232  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:57.521259  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:57.521270  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:57.521276  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:57.525457  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:57.526333  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:37:58.020399  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:58.020422  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:58.020432  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:58.020437  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:58.023824  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:58.521059  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:58.521085  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:58.521096  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:58.521103  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:58.525144  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:37:59.021061  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:59.021083  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:59.021092  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:59.021095  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:59.024397  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:37:59.520972  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:37:59.520997  413977 round_trippers.go:469] Request Headers:
	I0731 18:37:59.521005  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:37:59.521011  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:37:59.524720  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:00.020633  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:00.020673  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:00.020701  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:00.020706  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:00.024455  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:00.025230  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:00.520533  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:00.520557  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:00.520566  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:00.520570  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:00.523922  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:01.021019  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:01.021045  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:01.021054  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:01.021061  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:01.024556  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:01.520923  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:01.520950  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:01.520958  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:01.520964  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:01.524976  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:02.020970  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:02.020996  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:02.021007  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:02.021013  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:02.024935  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:02.025463  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:02.520959  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:02.520984  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:02.520993  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:02.520997  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:02.524873  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:03.020811  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:03.020833  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:03.020841  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:03.020845  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:03.024096  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:03.520978  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:03.520999  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:03.521008  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:03.521012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:03.524688  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.020620  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:04.020644  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:04.020653  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:04.020658  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:04.024257  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.521191  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:04.521217  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:04.521227  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:04.521233  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:04.525225  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:04.525790  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:05.020940  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:05.020965  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:05.020973  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:05.020979  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:05.024447  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:05.520304  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:05.520329  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:05.520338  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:05.520343  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:05.523406  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:06.021018  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:06.021052  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:06.021062  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:06.021067  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:06.025126  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:06.520550  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:06.520575  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:06.520585  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:06.520591  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:06.523794  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:07.020912  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:07.020938  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:07.020947  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:07.020956  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:07.024848  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:07.025558  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:07.520941  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:07.520971  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:07.520980  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:07.520987  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:07.524549  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:08.020563  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:08.020586  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:08.020594  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:08.020598  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:08.024468  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:08.520334  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:08.520362  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:08.520388  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:08.520395  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:08.524025  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:09.021226  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:09.021251  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:09.021261  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:09.021266  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:09.024956  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:09.025585  413977 node_ready.go:53] node "ha-326651-m03" has status "Ready":"False"
	I0731 18:38:09.521043  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:09.521075  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:09.521089  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:09.521092  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:09.524908  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.020899  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.020929  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.020940  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.020947  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.026586  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:38:10.027154  413977 node_ready.go:49] node "ha-326651-m03" has status "Ready":"True"
	I0731 18:38:10.027177  413977 node_ready.go:38] duration metric: took 19.007174611s for node "ha-326651-m03" to be "Ready" ...
	I0731 18:38:10.027188  413977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:38:10.027258  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:10.027268  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.027276  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.027280  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.035717  413977 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0731 18:38:10.043200  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.043298  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hsr7k
	I0731 18:38:10.043306  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.043314  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.043319  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.046582  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.047642  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.047659  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.047667  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.047672  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.050840  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.051495  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.051515  413977 pod_ready.go:81] duration metric: took 8.283282ms for pod "coredns-7db6d8ff4d-hsr7k" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.051525  413977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.051600  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-p2tfn
	I0731 18:38:10.051608  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.051615  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.051619  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.054430  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.055531  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.055547  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.055555  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.055559  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.058540  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.059077  413977 pod_ready.go:92] pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.059101  413977 pod_ready.go:81] duration metric: took 7.57011ms for pod "coredns-7db6d8ff4d-p2tfn" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.059110  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.059168  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651
	I0731 18:38:10.059176  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.059183  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.059190  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.062091  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.062762  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.062778  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.062788  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.062794  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.066487  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.067039  413977 pod_ready.go:92] pod "etcd-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.067061  413977 pod_ready.go:81] duration metric: took 7.944797ms for pod "etcd-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.067070  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.067142  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m02
	I0731 18:38:10.067149  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.067157  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.067161  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.071867  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:10.072519  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:10.072535  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.072543  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.072546  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.075294  413977 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 18:38:10.075774  413977 pod_ready.go:92] pod "etcd-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.075799  413977 pod_ready.go:81] duration metric: took 8.721779ms for pod "etcd-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.075812  413977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.221100  413977 request.go:629] Waited for 145.199845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m03
	I0731 18:38:10.221193  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651-m03
	I0731 18:38:10.221198  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.221208  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.221211  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.225082  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.421064  413977 request.go:629] Waited for 195.324231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.421150  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:10.421158  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.421168  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.421177  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.424696  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.425371  413977 pod_ready.go:92] pod "etcd-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.425390  413977 pod_ready.go:81] duration metric: took 349.57135ms for pod "etcd-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.425406  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.621716  413977 request.go:629] Waited for 196.22376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:38:10.621796  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651
	I0731 18:38:10.621805  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.621816  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.621834  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.627527  413977 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 18:38:10.821394  413977 request.go:629] Waited for 193.164189ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.821454  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:10.821459  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:10.821466  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:10.821471  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:10.824875  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:10.825388  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:10.825411  413977 pod_ready.go:81] duration metric: took 399.998459ms for pod "kube-apiserver-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:10.825421  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.021931  413977 request.go:629] Waited for 196.409806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:38:11.021996  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m02
	I0731 18:38:11.022001  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.022009  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.022013  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.028369  413977 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 18:38:11.221479  413977 request.go:629] Waited for 192.390158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:11.221571  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:11.221577  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.221591  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.221598  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.225466  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:11.226265  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:11.226285  413977 pod_ready.go:81] duration metric: took 400.858148ms for pod "kube-apiserver-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.226295  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.421487  413977 request.go:629] Waited for 195.11476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m03
	I0731 18:38:11.421580  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651-m03
	I0731 18:38:11.421589  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.421600  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.421609  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.425699  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:11.621525  413977 request.go:629] Waited for 194.372228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:11.621602  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:11.621609  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.621617  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.621623  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.625368  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:11.625802  413977 pod_ready.go:92] pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:11.625820  413977 pod_ready.go:81] duration metric: took 399.518861ms for pod "kube-apiserver-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.625829  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:11.820973  413977 request.go:629] Waited for 195.0508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:38:11.821037  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651
	I0731 18:38:11.821043  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:11.821051  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:11.821057  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:11.825144  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:12.021362  413977 request.go:629] Waited for 195.36707ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:12.021423  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:12.021428  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.021436  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.021442  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.024957  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.025602  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.025620  413977 pod_ready.go:81] duration metric: took 399.784534ms for pod "kube-controller-manager-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.025630  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.221703  413977 request.go:629] Waited for 195.978806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:38:12.221780  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m02
	I0731 18:38:12.221787  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.221797  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.221805  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.225192  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.421416  413977 request.go:629] Waited for 195.354453ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:12.421489  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:12.421495  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.421503  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.421507  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.425421  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.425916  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.425934  413977 pod_ready.go:81] duration metric: took 400.298077ms for pod "kube-controller-manager-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.425943  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.620972  413977 request.go:629] Waited for 194.932661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m03
	I0731 18:38:12.621053  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-326651-m03
	I0731 18:38:12.621059  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.621067  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.621073  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.624964  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:12.821068  413977 request.go:629] Waited for 195.318196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:12.821177  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:12.821189  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:12.821201  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:12.821209  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:12.825278  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:12.825995  413977 pod_ready.go:92] pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:12.826025  413977 pod_ready.go:81] duration metric: took 400.072019ms for pod "kube-controller-manager-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:12.826040  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.021215  413977 request.go:629] Waited for 195.095055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:38:13.021300  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj
	I0731 18:38:13.021306  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.021314  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.021321  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.025388  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:13.221247  413977 request.go:629] Waited for 195.267433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:13.221340  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:13.221346  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.221357  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.221366  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.225916  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:13.226871  413977 pod_ready.go:92] pod "kube-proxy-hg6sj" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:13.226891  413977 pod_ready.go:81] duration metric: took 400.843747ms for pod "kube-proxy-hg6sj" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.226901  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lhprb" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.420997  413977 request.go:629] Waited for 193.980744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhprb
	I0731 18:38:13.421086  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lhprb
	I0731 18:38:13.421094  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.421106  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.421117  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.424452  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:13.621523  413977 request.go:629] Waited for 196.378142ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:13.621596  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:13.621603  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.621611  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.621616  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.625410  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:13.626111  413977 pod_ready.go:92] pod "kube-proxy-lhprb" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:13.626136  413977 pod_ready.go:81] duration metric: took 399.227736ms for pod "kube-proxy-lhprb" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.626145  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:13.821285  413977 request.go:629] Waited for 195.069421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:38:13.821356  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-stqb2
	I0731 18:38:13.821362  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:13.821370  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:13.821375  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:13.825063  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.020994  413977 request.go:629] Waited for 195.299514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.021082  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.021090  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.021098  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.021102  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.025085  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.026142  413977 pod_ready.go:92] pod "kube-proxy-stqb2" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.026167  413977 pod_ready.go:81] duration metric: took 400.013833ms for pod "kube-proxy-stqb2" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.026179  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.221390  413977 request.go:629] Waited for 195.112801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:38:14.221451  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651
	I0731 18:38:14.221457  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.221467  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.221473  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.225827  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:14.421805  413977 request.go:629] Waited for 195.378126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:14.421877  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651
	I0731 18:38:14.421882  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.421890  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.421894  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.425460  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.426059  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.426086  413977 pod_ready.go:81] duration metric: took 399.894725ms for pod "kube-scheduler-ha-326651" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.426099  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.621177  413977 request.go:629] Waited for 194.98251ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:38:14.621273  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m02
	I0731 18:38:14.621285  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.621295  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.621304  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.624878  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.821926  413977 request.go:629] Waited for 196.372921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.821992  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m02
	I0731 18:38:14.821997  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:14.822006  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:14.822012  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:14.825529  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:14.826158  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:14.826179  413977 pod_ready.go:81] duration metric: took 400.068887ms for pod "kube-scheduler-ha-326651-m02" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:14.826188  413977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:15.021612  413977 request.go:629] Waited for 195.3289ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m03
	I0731 18:38:15.021684  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-326651-m03
	I0731 18:38:15.021691  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.021700  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.021706  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.025857  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:15.220980  413977 request.go:629] Waited for 194.283799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:15.221085  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-326651-m03
	I0731 18:38:15.221096  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.221107  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.221125  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.224598  413977 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 18:38:15.225281  413977 pod_ready.go:92] pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 18:38:15.225300  413977 pod_ready.go:81] duration metric: took 399.106803ms for pod "kube-scheduler-ha-326651-m03" in "kube-system" namespace to be "Ready" ...
	I0731 18:38:15.225311  413977 pod_ready.go:38] duration metric: took 5.198111046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 18:38:15.225329  413977 api_server.go:52] waiting for apiserver process to appear ...
	I0731 18:38:15.225387  413977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:38:15.243672  413977 api_server.go:72] duration metric: took 24.586631178s to wait for apiserver process to appear ...
	I0731 18:38:15.243711  413977 api_server.go:88] waiting for apiserver healthz status ...
	I0731 18:38:15.243743  413977 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I0731 18:38:15.248624  413977 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I0731 18:38:15.248719  413977 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I0731 18:38:15.248730  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.248742  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.248754  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.249814  413977 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 18:38:15.249890  413977 api_server.go:141] control plane version: v1.30.3
	I0731 18:38:15.249906  413977 api_server.go:131] duration metric: took 6.187462ms to wait for apiserver health ...
	I0731 18:38:15.249921  413977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 18:38:15.421359  413977 request.go:629] Waited for 171.338586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.421420  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.421425  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.421433  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.421437  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.430726  413977 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0731 18:38:15.437067  413977 system_pods.go:59] 24 kube-system pods found
	I0731 18:38:15.437103  413977 system_pods.go:61] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:38:15.437109  413977 system_pods.go:61] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:38:15.437114  413977 system_pods.go:61] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:38:15.437124  413977 system_pods.go:61] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:38:15.437128  413977 system_pods.go:61] "etcd-ha-326651-m03" [ad71c742-0bb9-4137-b09a-fae975369a6a] Running
	I0731 18:38:15.437132  413977 system_pods.go:61] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:38:15.437137  413977 system_pods.go:61] "kindnet-86n7r" [6430d759-54b9-44cb-b0d1-b36311f326ec] Running
	I0731 18:38:15.437141  413977 system_pods.go:61] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:38:15.437145  413977 system_pods.go:61] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:38:15.437150  413977 system_pods.go:61] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:38:15.437155  413977 system_pods.go:61] "kube-apiserver-ha-326651-m03" [e12967b2-20f8-4c88-9f13-24b09828a0bc] Running
	I0731 18:38:15.437161  413977 system_pods.go:61] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:38:15.437166  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:38:15.437175  413977 system_pods.go:61] "kube-controller-manager-ha-326651-m03" [9173f006-38ea-4e55-a4b7-447fc467725f] Running
	I0731 18:38:15.437181  413977 system_pods.go:61] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:38:15.437187  413977 system_pods.go:61] "kube-proxy-lhprb" [8959da87-d806-49dc-be69-c495fb8de9ff] Running
	I0731 18:38:15.437193  413977 system_pods.go:61] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:38:15.437201  413977 system_pods.go:61] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:38:15.437206  413977 system_pods.go:61] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:38:15.437212  413977 system_pods.go:61] "kube-scheduler-ha-326651-m03" [047e337d-b07a-4ca2-893a-2310b5c53319] Running
	I0731 18:38:15.437218  413977 system_pods.go:61] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:38:15.437225  413977 system_pods.go:61] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:38:15.437230  413977 system_pods.go:61] "kube-vip-ha-326651-m03" [ed447ffb-4803-476f-9c83-d3573aeb2f8a] Running
	I0731 18:38:15.437237  413977 system_pods.go:61] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:38:15.437246  413977 system_pods.go:74] duration metric: took 187.316741ms to wait for pod list to return data ...
	I0731 18:38:15.437258  413977 default_sa.go:34] waiting for default service account to be created ...
	I0731 18:38:15.621572  413977 request.go:629] Waited for 184.226167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:38:15.621654  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I0731 18:38:15.621661  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.621673  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.621693  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.626128  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:15.626281  413977 default_sa.go:45] found service account: "default"
	I0731 18:38:15.626301  413977 default_sa.go:55] duration metric: took 189.035538ms for default service account to be created ...
	I0731 18:38:15.626313  413977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 18:38:15.821678  413977 request.go:629] Waited for 195.265839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.821749  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I0731 18:38:15.821756  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:15.821768  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:15.821777  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:15.829324  413977 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 18:38:15.835995  413977 system_pods.go:86] 24 kube-system pods found
	I0731 18:38:15.836028  413977 system_pods.go:89] "coredns-7db6d8ff4d-hsr7k" [2e5422b4-4ebd-43f5-a062-d3be49c5be0a] Running
	I0731 18:38:15.836036  413977 system_pods.go:89] "coredns-7db6d8ff4d-p2tfn" [587a07ed-e2cf-40d1-8bc7-3800836f036e] Running
	I0731 18:38:15.836043  413977 system_pods.go:89] "etcd-ha-326651" [a6eff394-766d-4118-a7fc-ab4e19bfdefc] Running
	I0731 18:38:15.836051  413977 system_pods.go:89] "etcd-ha-326651-m02" [549a4bd0-ffca-4ad3-9133-319f4dbb0740] Running
	I0731 18:38:15.836056  413977 system_pods.go:89] "etcd-ha-326651-m03" [ad71c742-0bb9-4137-b09a-fae975369a6a] Running
	I0731 18:38:15.836061  413977 system_pods.go:89] "kindnet-7l9l7" [01baa55e-b953-475a-b2fd-3944223a6161] Running
	I0731 18:38:15.836067  413977 system_pods.go:89] "kindnet-86n7r" [6430d759-54b9-44cb-b0d1-b36311f326ec] Running
	I0731 18:38:15.836075  413977 system_pods.go:89] "kindnet-n7q8p" [70ddf674-b678-4b7b-bae7-fd62e1c87bb5] Running
	I0731 18:38:15.836082  413977 system_pods.go:89] "kube-apiserver-ha-326651" [faa98457-9ce7-4e25-b6f2-d5e4295e3fae] Running
	I0731 18:38:15.836089  413977 system_pods.go:89] "kube-apiserver-ha-326651-m02" [cfd22af7-b21a-48d7-af69-f90a903c89cf] Running
	I0731 18:38:15.836097  413977 system_pods.go:89] "kube-apiserver-ha-326651-m03" [e12967b2-20f8-4c88-9f13-24b09828a0bc] Running
	I0731 18:38:15.836110  413977 system_pods.go:89] "kube-controller-manager-ha-326651" [f4a1ef16-03ea-4717-8f6c-b6dc0a410200] Running
	I0731 18:38:15.836121  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m02" [9e03b3bc-f592-4e20-9788-de5541fd61f6] Running
	I0731 18:38:15.836129  413977 system_pods.go:89] "kube-controller-manager-ha-326651-m03" [9173f006-38ea-4e55-a4b7-447fc467725f] Running
	I0731 18:38:15.836138  413977 system_pods.go:89] "kube-proxy-hg6sj" [40cf0ce9-4b32-45fb-adef-577d742e433a] Running
	I0731 18:38:15.836144  413977 system_pods.go:89] "kube-proxy-lhprb" [8959da87-d806-49dc-be69-c495fb8de9ff] Running
	I0731 18:38:15.836151  413977 system_pods.go:89] "kube-proxy-stqb2" [a79b8436-2c8b-417b-9746-f92a9194c191] Running
	I0731 18:38:15.836164  413977 system_pods.go:89] "kube-scheduler-ha-326651" [dd774dbd-9a78-4401-8a2c-bb4ec41a013e] Running
	I0731 18:38:15.836173  413977 system_pods.go:89] "kube-scheduler-ha-326651-m02" [c4eb76e8-8466-4824-985b-022acb2c1d31] Running
	I0731 18:38:15.836181  413977 system_pods.go:89] "kube-scheduler-ha-326651-m03" [047e337d-b07a-4ca2-893a-2310b5c53319] Running
	I0731 18:38:15.836190  413977 system_pods.go:89] "kube-vip-ha-326651" [55d22288-ccee-4e17-95b6-4a96e86fca09] Running
	I0731 18:38:15.836196  413977 system_pods.go:89] "kube-vip-ha-326651-m02" [275e0914-784c-4d91-845a-25d5d67ccb56] Running
	I0731 18:38:15.836205  413977 system_pods.go:89] "kube-vip-ha-326651-m03" [ed447ffb-4803-476f-9c83-d3573aeb2f8a] Running
	I0731 18:38:15.836211  413977 system_pods.go:89] "storage-provisioner" [83869540-accb-4a58-b094-6bdc6b4c1944] Running
	I0731 18:38:15.836225  413977 system_pods.go:126] duration metric: took 209.903247ms to wait for k8s-apps to be running ...
	I0731 18:38:15.836238  413977 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 18:38:15.836291  413977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:38:15.854916  413977 system_svc.go:56] duration metric: took 18.666909ms WaitForService to wait for kubelet
	I0731 18:38:15.854954  413977 kubeadm.go:582] duration metric: took 25.197919918s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:38:15.854984  413977 node_conditions.go:102] verifying NodePressure condition ...
	I0731 18:38:16.021541  413977 request.go:629] Waited for 166.470634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I0731 18:38:16.021634  413977 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I0731 18:38:16.021645  413977 round_trippers.go:469] Request Headers:
	I0731 18:38:16.021657  413977 round_trippers.go:473]     Accept: application/json, */*
	I0731 18:38:16.021663  413977 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 18:38:16.026196  413977 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 18:38:16.027352  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027387  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027401  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027406  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027411  413977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 18:38:16.027417  413977 node_conditions.go:123] node cpu capacity is 2
	I0731 18:38:16.027423  413977 node_conditions.go:105] duration metric: took 172.433035ms to run NodePressure ...
	I0731 18:38:16.027438  413977 start.go:241] waiting for startup goroutines ...
	I0731 18:38:16.027462  413977 start.go:255] writing updated cluster config ...
	I0731 18:38:16.027756  413977 ssh_runner.go:195] Run: rm -f paused
	I0731 18:38:16.079451  413977 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 18:38:16.081596  413977 out.go:177] * Done! kubectl is now configured to use "ha-326651" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.606948730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451383606921813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7690123-973c-409f-98ea-29b228c938f5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.607680939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0316905-9c71-4ea7-81ea-5c6237432210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.607737593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0316905-9c71-4ea7-81ea-5c6237432210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.608613873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0316905-9c71-4ea7-81ea-5c6237432210 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.655617111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=596a27ee-361c-4c4e-b7a1-ddd98477fcd0 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.655693791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=596a27ee-361c-4c4e-b7a1-ddd98477fcd0 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.657121180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa4a7c80-73e9-4890-b168-0e456652c0ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.657602131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451383657580280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa4a7c80-73e9-4890-b168-0e456652c0ec name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.658312754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68de400b-4364-4a08-aca0-897cc3f7f158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.658363807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68de400b-4364-4a08-aca0-897cc3f7f158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.658732098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68de400b-4364-4a08-aca0-897cc3f7f158 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.698799537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1227b62-a1e7-4735-8a8a-2d5d3c8758ba name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.698871117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1227b62-a1e7-4735-8a8a-2d5d3c8758ba name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.700340571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=edd6d060-81d6-4254-9f5c-15442a9636f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.700860988Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451383700837308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=edd6d060-81d6-4254-9f5c-15442a9636f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.701537755Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa3b59e1-daa9-464b-89c7-f909e64f7b27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.701591056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa3b59e1-daa9-464b-89c7-f909e64f7b27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.701818399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa3b59e1-daa9-464b-89c7-f909e64f7b27 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.743357951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a10148d-cd5c-474e-a661-a7ea9c4ff726 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.743430531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a10148d-cd5c-474e-a661-a7ea9c4ff726 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.744622242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e63c7dd9-ed23-48ca-a494-8972fa5efc7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.745461717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451383745435675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e63c7dd9-ed23-48ca-a494-8972fa5efc7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.745896422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a92e710-6ab9-44e2-91e5-8875c6087092 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.745946383Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a92e710-6ab9-44e2-91e5-8875c6087092 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:43:03 ha-326651 crio[679]: time="2024-07-31 18:43:03.746378533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451100226008899,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e606e2ddae6cab3c4bb52e7f430e5803e2d522adaa2e9f976881b747b6f98338,PodSandboxId:bba5c545e084b4e3f38b874bb038194ecb669868ec275c9ea5488080cc6def61,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722450950641028456,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950607807618,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722450950608998244,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4e
bd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722450938614965965,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172245093
4538748418,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:753a0f44161e1de2225d14da3225467d7152f5860e72007275378a7ccc527ab7,PodSandboxId:e88355a2eb9b7dfdd6ba325b5d009657c6bbc43b18e9fe7095bfe623cbc34320,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224509175
19285946,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 06391a0c9df2fa93c4dd985124e038bd,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9,PodSandboxId:8b070b038e8acdce055c2987fe0b101d4e86ac1a5ea35db7d16a96f0aaedd58a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722450914160828958,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722450914121802189,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c,PodSandboxId:4e3fbd67a5009c0d45130b21520caecfd2092fca9a8d843e592273a356bc4d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722450914042973680,Labels:map[string]string{io.kubernetes.container.name: ku
be-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722450913996111279,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a92e710-6ab9-44e2-91e5-8875c6087092 name=/runtime.v1.RuntimeService/ListContainers
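
	The CRI-O lines above are debug-level responses to the kubelet's periodic Version, ImageFsInfo and ListContainers polls; the same container list is returned on every poll under a new request id. Purely as an illustration, and assuming crictl is available inside the minikube VM, the same data can be inspected by hand:

	  out/minikube-linux-amd64 -p ha-326651 ssh "sudo crictl ps -a"        # container list, as in ListContainers
	  out/minikube-linux-amd64 -p ha-326651 ssh "sudo crictl imagefsinfo"  # image filesystem usage, as in ImageFsInfo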
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f413f75c91415       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   25be6f24676d4       busybox-fc5497c4f-mknlp
	e606e2ddae6ca       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   bba5c545e084b       storage-provisioner
	68c50c65ea238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   d651e4190c72a       coredns-7db6d8ff4d-hsr7k
	36f0c9b04bb2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   8a4d6fb11ec09       coredns-7db6d8ff4d-p2tfn
	81362a0e08184       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   8783b79032fde       kindnet-n7q8p
	5abc9372bd5fd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   4ed8613feb5ec       kube-proxy-hg6sj
	753a0f44161e1       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   e88355a2eb9b7       kube-vip-ha-326651
	a34e4c7715d1c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   8b070b038e8ac       kube-apiserver-ha-326651
	c40e9679adc35       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   4bc17ce1c9d2f       kube-scheduler-ha-326651
	44a042c1af736       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   4e3fbd67a5009       kube-controller-manager-ha-326651
	bd3d8dbedb96a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   1e765f5d9b3b0       etcd-ha-326651
	
	
	==> coredns [36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7] <==
	[INFO] 10.244.1.2:47344 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198871s
	[INFO] 10.244.1.2:38776 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144715s
	[INFO] 10.244.1.2:41083 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003145331s
	[INFO] 10.244.1.2:43785 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150568s
	[INFO] 10.244.2.2:50028 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001760635s
	[INFO] 10.244.2.2:45304 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093222s
	[INFO] 10.244.2.2:36540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000140369s
	[INFO] 10.244.0.4:43466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105451s
	[INFO] 10.244.0.4:43878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152423s
	[INFO] 10.244.0.4:49227 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079008s
	[INFO] 10.244.0.4:47339 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074836s
	[INFO] 10.244.0.4:60002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056953s
	[INFO] 10.244.1.2:60772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013788s
	[INFO] 10.244.1.2:34997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091978s
	[INFO] 10.244.2.2:48501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137292s
	[INFO] 10.244.2.2:41701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113322s
	[INFO] 10.244.2.2:46841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192541s
	[INFO] 10.244.2.2:37979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066316s
	[INFO] 10.244.0.4:41261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093714s
	[INFO] 10.244.0.4:56128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073138s
	[INFO] 10.244.1.2:60703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131127s
	[INFO] 10.244.1.2:47436 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239598s
	[INFO] 10.244.1.2:57459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181068s
	[INFO] 10.244.2.2:56898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174969s
	[INFO] 10.244.2.2:33868 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108451s
	
	
	==> coredns [68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33] <==
	[INFO] 10.244.2.2:57152 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000266349s
	[INFO] 10.244.2.2:48987 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.0027298s
	[INFO] 10.244.0.4:46694 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002128574s
	[INFO] 10.244.1.2:43669 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113288s
	[INFO] 10.244.1.2:41521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133016s
	[INFO] 10.244.1.2:38952 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000113284s
	[INFO] 10.244.2.2:37151 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132014s
	[INFO] 10.244.2.2:52172 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280659s
	[INFO] 10.244.2.2:43370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363635s
	[INFO] 10.244.2.2:52527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117452s
	[INFO] 10.244.2.2:48596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117278s
	[INFO] 10.244.0.4:55816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992063s
	[INFO] 10.244.0.4:33045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291238s
	[INFO] 10.244.0.4:37880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043099s
	[INFO] 10.244.1.2:40143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128845s
	[INFO] 10.244.1.2:48970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131569s
	[INFO] 10.244.0.4:57102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075402s
	[INFO] 10.244.0.4:54508 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004372s
	[INFO] 10.244.1.2:37053 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000194922s
	[INFO] 10.244.2.2:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129881s
	[INFO] 10.244.2.2:48437 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148815s
	[INFO] 10.244.0.4:50060 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094079s
	[INFO] 10.244.0.4:42736 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105289s
	[INFO] 10.244.0.4:43280 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052254s
	[INFO] 10.244.0.4:47658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074002s
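
	Note: both CoreDNS instances are answering service and PTR lookups with NOERROR/NXDOMAIN in well under a millisecond, so in-cluster DNS looks healthy. A lookup like the ones logged above can be generated from a throwaway pod roughly as follows (pod name and image are illustrative, and the kubeconfig context is assumed to match the profile name):

	  $ kubectl --context ha-326651 run dns-probe --rm -it --restart=Never \
	      --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local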
	
	
	==> describe nodes <==
	Name:               ha-326651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:42:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:38:23 +0000   Wed, 31 Jul 2024 18:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-326651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 419482855e6c4b5d814fd4a3e9e4847f
	  System UUID:                41948285-5e6c-4b5d-814f-d4a3e9e4847f
	  Boot ID:                    87f7122f-f0c1-4fc2-964d-0fcb352e2937
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mknlp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 coredns-7db6d8ff4d-hsr7k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m30s
	  kube-system                 coredns-7db6d8ff4d-p2tfn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m30s
	  kube-system                 etcd-ha-326651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m45s
	  kube-system                 kindnet-n7q8p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-apiserver-ha-326651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-controller-manager-ha-326651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m44s
	  kube-system                 kube-proxy-hg6sj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-scheduler-ha-326651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-vip-ha-326651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m28s  kube-proxy       
	  Normal  Starting                 7m44s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m44s  kubelet          Node ha-326651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m44s  kubelet          Node ha-326651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m44s  kubelet          Node ha-326651 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m31s  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal  NodeReady                7m14s  kubelet          Node ha-326651 status is now: NodeReady
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal  RegisteredNode           5m     node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	
	
	Name:               ha-326651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:36:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:39:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 18:38:31 +0000   Wed, 31 Jul 2024 18:40:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-326651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e6699cde3924aaf94b25ab366c2acb8
	  System UUID:                2e6699cd-e392-4aaf-94b2-5ab366c2acb8
	  Boot ID:                    5c1932c2-b9e7-4809-bb21-3c186514aaf1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cs6t8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 etcd-ha-326651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-7l9l7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-apiserver-ha-326651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-326651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-stqb2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-326651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-vip-ha-326651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m31s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  NodeNotReady             2m50s                  node-controller  Node ha-326651-m02 status is now: NodeNotReady
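
	Note: ha-326651-m02 has stopped posting status (all conditions Unknown since 18:40:14) and carries the node.kubernetes.io/unreachable NoSchedule/NoExecute taints, i.e. this control-plane node is unreachable from the cluster's point of view. The taints and readiness can be checked directly, for example (kubeconfig context assumed to match the profile name):

	  $ kubectl --context ha-326651 get node ha-326651-m02 -o jsonpath='{.spec.taints}'
	  $ kubectl --context ha-326651 get nodes -o wide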
	
	
	Name:               ha-326651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_37_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:37:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:42:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:38:47 +0000   Wed, 31 Jul 2024 18:38:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    ha-326651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5e4f78408f84c3ebbac53526a1e33d5
	  System UUID:                b5e4f784-08f8-4c3e-bbac-53526a1e33d5
	  Boot ID:                    2718d67d-347e-4fc9-8721-5da654c627d8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lgg6t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 etcd-ha-326651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-86n7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-326651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-326651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-lhprb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-326651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-vip-ha-326651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node ha-326651-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal  RegisteredNode           5m                     node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	
	
	Name:               ha-326651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:43:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:38:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:39:26 +0000   Wed, 31 Jul 2024 18:39:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-326651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbaa436975294cf08fb310ae9ef7d64d
	  System UUID:                cbaa4369-7529-4cf0-8fb3-10ae9ef7d64d
	  Boot ID:                    1d6cf453-df7b-4ae4-8590-9f364b6fc76f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nmwh7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-proxy-2nq9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x2 over 4m9s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x2 over 4m9s)  kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x2 over 4m9s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal  NodeReady                3m47s                kubelet          Node ha-326651-m04 status is now: NodeReady
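
	Note: the four node descriptions above are the usual "describe nodes" snapshot for this HA profile: ha-326651, ha-326651-m03 and the worker ha-326651-m04 are Ready, while ha-326651-m02 is NotReady. The snapshot can be regenerated with (context name assumed to match the profile):

	  $ kubectl --context ha-326651 describe nodes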
	
	
	==> dmesg <==
	[Jul31 18:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050750] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.802354] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.525465] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.557020] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:35] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.063136] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063799] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.163467] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151948] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.299453] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.312604] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.062376] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.195979] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.049374] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.105366] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.338531] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.117589] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 18:36] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a] <==
	{"level":"warn","ts":"2024-07-31T18:43:03.966602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.029543Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.033727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.039723Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.043688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.057256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.064945Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.073062Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.078337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.082915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.0916Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.097774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.104776Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.10881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.112566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.121555Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.132657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.133034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.139897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.143894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.148232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.157961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.170808Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.180486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T18:43:04.233253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:43:04 up 8 min,  0 users,  load average: 0.16, 0.23, 0.16
	Linux ha-326651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821] <==
	I0731 18:42:29.763500       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:42:39.756491       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:42:39.756606       1 main.go:299] handling current node
	I0731 18:42:39.756637       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:42:39.756655       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:42:39.756943       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:42:39.756985       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:42:39.757101       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:42:39.757123       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:42:49.760445       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:42:49.760550       1 main.go:299] handling current node
	I0731 18:42:49.760578       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:42:49.760596       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:42:49.760845       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:42:49.760890       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:42:49.760967       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:42:49.760986       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:42:59.764496       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:42:59.764566       1 main.go:299] handling current node
	I0731 18:42:59.764581       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:42:59.764587       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:42:59.764739       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:42:59.764761       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:42:59.764866       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:42:59.764906       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a34e4c7715d1c6998e94d6278f41c33d095a849b87414e8eb00fddf7b3007da9] <==
	E0731 18:38:22.210353       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36022: use of closed network connection
	E0731 18:38:22.397528       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36046: use of closed network connection
	E0731 18:38:22.587433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36072: use of closed network connection
	E0731 18:38:22.788125       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36096: use of closed network connection
	E0731 18:38:22.984460       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36112: use of closed network connection
	E0731 18:38:23.156532       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36122: use of closed network connection
	E0731 18:38:23.443380       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36140: use of closed network connection
	E0731 18:38:23.624792       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36156: use of closed network connection
	E0731 18:38:23.812780       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36170: use of closed network connection
	E0731 18:38:24.011428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36186: use of closed network connection
	E0731 18:38:24.197406       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36206: use of closed network connection
	E0731 18:38:24.394769       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36222: use of closed network connection
	I0731 18:38:59.150272       1 trace.go:236] Trace[1721628771]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:bb87a20f-b62f-4f72-ab5b-07163d19ba59,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (31-Jul-2024 18:38:58.473) (total time: 677ms):
	Trace[1721628771]: ---"watchCache locked acquired" 674ms (18:38:59.147)
	Trace[1721628771]: [677.11243ms] [677.11243ms] END
	I0731 18:38:59.154259       1 trace.go:236] Trace[323589200]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a29d14a2-066d-4ed9-b0a4-d9f8dd6cb7e6,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (31-Jul-2024 18:38:58.475) (total time: 679ms):
	Trace[323589200]: ---"watchCache locked acquired" 675ms (18:38:59.151)
	Trace[323589200]: [679.156339ms] [679.156339ms] END
	I0731 18:38:59.156080       1 trace.go:236] Trace[57705693]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8ea97a64-53dd-4514-8f03-4ce418c8f3f0,client:192.168.39.17,api-group:,api-version:v1,name:kube-proxy-2nq9j,subresource:status,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-2nq9j/status,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PATCH (31-Jul-2024 18:38:58.300) (total time: 855ms):
	Trace[57705693]: ["GuaranteedUpdate etcd3" audit-id:8ea97a64-53dd-4514-8f03-4ce418c8f3f0,key:/pods/kube-system/kube-proxy-2nq9j,type:*core.Pod,resource:pods 855ms (18:38:58.300)
	Trace[57705693]:  ---"Txn call completed" 362ms (18:38:58.665)
	Trace[57705693]:  ---"Txn call completed" 486ms (18:38:59.155)]
	Trace[57705693]: ---"About to apply patch" 362ms (18:38:58.666)
	Trace[57705693]: ---"Object stored in database" 486ms (18:38:59.155)
	Trace[57705693]: [855.742205ms] [855.742205ms] END
	
	
	==> kube-controller-manager [44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c] <==
	I0731 18:38:17.079987       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.765766ms"
	I0731 18:38:17.187017       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.825986ms"
	I0731 18:38:17.366206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="179.048099ms"
	I0731 18:38:17.470263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="103.912729ms"
	E0731 18:38:17.470317       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0731 18:38:17.470435       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.911µs"
	I0731 18:38:17.483466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.859µs"
	I0731 18:38:19.551693       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.686µs"
	I0731 18:38:20.362733       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.568µs"
	I0731 18:38:21.039328       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.741076ms"
	I0731 18:38:21.039498       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.129µs"
	I0731 18:38:21.079118       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.707212ms"
	I0731 18:38:21.079285       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.776µs"
	I0731 18:38:21.156480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.571818ms"
	I0731 18:38:21.170495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.949659ms"
	I0731 18:38:21.170665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.991µs"
	E0731 18:38:55.866748       1 certificate_controller.go:146] Sync csr-97qqp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-97qqp": the object has been modified; please apply your changes to the latest version and try again
	E0731 18:38:55.883803       1 certificate_controller.go:146] Sync csr-97qqp failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-97qqp": the object has been modified; please apply your changes to the latest version and try again
	I0731 18:38:56.129906       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-326651-m04\" does not exist"
	I0731 18:38:56.148636       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-326651-m04" podCIDRs=["10.244.3.0/24"]
	I0731 18:38:59.162188       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651-m04"
	I0731 18:39:17.616258       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-326651-m04"
	I0731 18:40:14.207730       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-326651-m04"
	I0731 18:40:14.312451       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.09684ms"
	I0731 18:40:14.321029       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="217.231µs"
	
	
	==> kube-proxy [5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd] <==
	I0731 18:35:34.995697       1 server_linux.go:69] "Using iptables proxy"
	I0731 18:35:35.014905       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	I0731 18:35:35.098687       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 18:35:35.098748       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 18:35:35.098767       1 server_linux.go:165] "Using iptables Proxier"
	I0731 18:35:35.111456       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:35:35.114373       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:35:35.114444       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:35:35.115994       1 config.go:192] "Starting service config controller"
	I0731 18:35:35.116291       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:35:35.116386       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:35:35.116409       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:35:35.118469       1 config.go:319] "Starting node config controller"
	I0731 18:35:35.118498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 18:35:35.217304       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 18:35:35.217423       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:35:35.218727       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd] <==
	W0731 18:35:18.555306       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:35:18.555375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0731 18:35:20.928367       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 18:37:46.964925       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-86n7r\": pod kindnet-86n7r is already assigned to node \"ha-326651-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-86n7r" node="ha-326651-m03"
	E0731 18:37:46.965044       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6430d759-54b9-44cb-b0d1-b36311f326ec(kube-system/kindnet-86n7r) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-86n7r"
	E0731 18:37:46.965068       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-86n7r\": pod kindnet-86n7r is already assigned to node \"ha-326651-m03\"" pod="kube-system/kindnet-86n7r"
	I0731 18:37:46.965105       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-86n7r" node="ha-326651-m03"
	I0731 18:38:17.014079       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="1e43b299-d997-4fdd-a163-a9bd587eec7e" pod="default/busybox-fc5497c4f-cs6t8" assumedNode="ha-326651-m02" currentNode="ha-326651-m03"
	E0731 18:38:17.034903       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cs6t8\": pod busybox-fc5497c4f-cs6t8 is already assigned to node \"ha-326651-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-cs6t8" node="ha-326651-m03"
	E0731 18:38:17.035002       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 1e43b299-d997-4fdd-a163-a9bd587eec7e(default/busybox-fc5497c4f-cs6t8) was assumed on ha-326651-m03 but assigned to ha-326651-m02" pod="default/busybox-fc5497c4f-cs6t8"
	E0731 18:38:17.035033       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-cs6t8\": pod busybox-fc5497c4f-cs6t8 is already assigned to node \"ha-326651-m02\"" pod="default/busybox-fc5497c4f-cs6t8"
	I0731 18:38:17.035091       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-cs6t8" node="ha-326651-m02"
	E0731 18:38:17.081837       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lgg6t\": pod busybox-fc5497c4f-lgg6t is already assigned to node \"ha-326651-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-lgg6t" node="ha-326651-m03"
	E0731 18:38:17.081987       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 8cd9612b-0afd-4dde-8ff1-6f8cd620a767(default/busybox-fc5497c4f-lgg6t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-lgg6t"
	E0731 18:38:17.082020       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-lgg6t\": pod busybox-fc5497c4f-lgg6t is already assigned to node \"ha-326651-m03\"" pod="default/busybox-fc5497c4f-lgg6t"
	I0731 18:38:17.082042       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-lgg6t" node="ha-326651-m03"
	E0731 18:38:17.086599       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mknlp\": pod busybox-fc5497c4f-mknlp is already assigned to node \"ha-326651\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-mknlp" node="ha-326651"
	E0731 18:38:17.086669       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 15a3f7d9-8405-4304-87da-8962e2d81f4e(default/busybox-fc5497c4f-mknlp) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-mknlp"
	E0731 18:38:17.086689       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-mknlp\": pod busybox-fc5497c4f-mknlp is already assigned to node \"ha-326651\"" pod="default/busybox-fc5497c4f-mknlp"
	I0731 18:38:17.086721       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-mknlp" node="ha-326651"
	E0731 18:38:56.213910       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-nmwh7\": pod kindnet-nmwh7 is already assigned to node \"ha-326651-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-nmwh7" node="ha-326651-m04"
	E0731 18:38:56.214255       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-nmwh7\": pod kindnet-nmwh7 is already assigned to node \"ha-326651-m04\"" pod="kube-system/kindnet-nmwh7"
	I0731 18:38:56.216255       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-nmwh7" node="ha-326651-m04"
	E0731 18:38:56.241628       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sk5s9\": pod kube-proxy-sk5s9 is already assigned to node \"ha-326651-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sk5s9" node="ha-326651-m04"
	E0731 18:38:56.241729       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sk5s9\": pod kube-proxy-sk5s9 is already assigned to node \"ha-326651-m04\"" pod="kube-system/kube-proxy-sk5s9"
	
	
	==> kubelet <==
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:38:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:38:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:38:22 ha-326651 kubelet[1381]: E0731 18:38:22.018940    1381 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34184->127.0.0.1:32875: write tcp 127.0.0.1:34184->127.0.0.1:32875: write: broken pipe
	Jul 31 18:39:20 ha-326651 kubelet[1381]: E0731 18:39:20.358535    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:39:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:39:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:40:20 ha-326651 kubelet[1381]: E0731 18:40:20.360365    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:40:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:40:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:41:20 ha-326651 kubelet[1381]: E0731 18:41:20.356206    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:41:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:41:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:42:20 ha-326651 kubelet[1381]: E0731 18:42:20.356910    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:42:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:42:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:42:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:42:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326651 -n ha-326651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-326651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-326651 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-326651 -v=7 --alsologtostderr
E0731 18:43:48.017436  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:44:15.702761  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-326651 -v=7 --alsologtostderr: exit status 82 (2m1.867748084s)

                                                
                                                
-- stdout --
	* Stopping node "ha-326651-m04"  ...
	* Stopping node "ha-326651-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:43:05.697732  419780 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:43:05.697891  419780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:43:05.697902  419780 out.go:304] Setting ErrFile to fd 2...
	I0731 18:43:05.697908  419780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:43:05.698109  419780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:43:05.698370  419780 out.go:298] Setting JSON to false
	I0731 18:43:05.698502  419780 mustload.go:65] Loading cluster: ha-326651
	I0731 18:43:05.698872  419780 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:43:05.698973  419780 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:43:05.699179  419780 mustload.go:65] Loading cluster: ha-326651
	I0731 18:43:05.699339  419780 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:43:05.699400  419780 stop.go:39] StopHost: ha-326651-m04
	I0731 18:43:05.699800  419780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:05.699876  419780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:05.715954  419780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33909
	I0731 18:43:05.716500  419780 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:05.717134  419780 main.go:141] libmachine: Using API Version  1
	I0731 18:43:05.717161  419780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:05.717539  419780 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:05.720021  419780 out.go:177] * Stopping node "ha-326651-m04"  ...
	I0731 18:43:05.721377  419780 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:43:05.721416  419780 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:43:05.721649  419780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:43:05.721691  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:43:05.724398  419780 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:05.724827  419780 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:38:39 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:43:05.724865  419780 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:43:05.725053  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:43:05.725225  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:43:05.725403  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:43:05.725544  419780 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:43:05.815859  419780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:43:05.869360  419780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:43:05.922697  419780 main.go:141] libmachine: Stopping "ha-326651-m04"...
	I0731 18:43:05.922747  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:43:05.924328  419780 main.go:141] libmachine: (ha-326651-m04) Calling .Stop
	I0731 18:43:05.927751  419780 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 0/120
	I0731 18:43:07.091218  419780 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:43:07.092553  419780 main.go:141] libmachine: Machine "ha-326651-m04" was stopped.
	I0731 18:43:07.092570  419780 stop.go:75] duration metric: took 1.371197261s to stop
	I0731 18:43:07.092597  419780 stop.go:39] StopHost: ha-326651-m03
	I0731 18:43:07.093092  419780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:43:07.093178  419780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:43:07.108262  419780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I0731 18:43:07.108847  419780 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:43:07.109536  419780 main.go:141] libmachine: Using API Version  1
	I0731 18:43:07.109560  419780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:43:07.109889  419780 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:43:07.112458  419780 out.go:177] * Stopping node "ha-326651-m03"  ...
	I0731 18:43:07.113891  419780 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:43:07.113925  419780 main.go:141] libmachine: (ha-326651-m03) Calling .DriverName
	I0731 18:43:07.114173  419780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:43:07.114199  419780 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHHostname
	I0731 18:43:07.117055  419780 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:07.117528  419780 main.go:141] libmachine: (ha-326651-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4a:ff:37", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:37:09 +0000 UTC Type:0 Mac:52:54:00:4a:ff:37 Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:ha-326651-m03 Clientid:01:52:54:00:4a:ff:37}
	I0731 18:43:07.117563  419780 main.go:141] libmachine: (ha-326651-m03) DBG | domain ha-326651-m03 has defined IP address 192.168.39.50 and MAC address 52:54:00:4a:ff:37 in network mk-ha-326651
	I0731 18:43:07.117711  419780 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHPort
	I0731 18:43:07.117925  419780 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHKeyPath
	I0731 18:43:07.118080  419780 main.go:141] libmachine: (ha-326651-m03) Calling .GetSSHUsername
	I0731 18:43:07.118195  419780 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m03/id_rsa Username:docker}
	I0731 18:43:07.206139  419780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:43:07.259349  419780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:43:07.314192  419780 main.go:141] libmachine: Stopping "ha-326651-m03"...
	I0731 18:43:07.314224  419780 main.go:141] libmachine: (ha-326651-m03) Calling .GetState
	I0731 18:43:07.315910  419780 main.go:141] libmachine: (ha-326651-m03) Calling .Stop
	I0731 18:43:07.319552  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 0/120
	I0731 18:43:08.320948  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 1/120
	I0731 18:43:09.322372  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 2/120
	I0731 18:43:10.323570  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 3/120
	I0731 18:43:11.325013  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 4/120
	I0731 18:43:12.327228  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 5/120
	I0731 18:43:13.328759  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 6/120
	I0731 18:43:14.330989  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 7/120
	I0731 18:43:15.332567  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 8/120
	I0731 18:43:16.334050  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 9/120
	I0731 18:43:17.336126  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 10/120
	I0731 18:43:18.337861  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 11/120
	I0731 18:43:19.339526  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 12/120
	I0731 18:43:20.341172  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 13/120
	I0731 18:43:21.342680  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 14/120
	I0731 18:43:22.344613  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 15/120
	I0731 18:43:23.346330  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 16/120
	I0731 18:43:24.347990  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 17/120
	I0731 18:43:25.349723  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 18/120
	I0731 18:43:26.351292  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 19/120
	I0731 18:43:27.352902  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 20/120
	I0731 18:43:28.354712  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 21/120
	I0731 18:43:29.356558  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 22/120
	I0731 18:43:30.357841  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 23/120
	I0731 18:43:31.359433  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 24/120
	I0731 18:43:32.361485  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 25/120
	I0731 18:43:33.363059  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 26/120
	I0731 18:43:34.364578  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 27/120
	I0731 18:43:35.366084  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 28/120
	I0731 18:43:36.367673  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 29/120
	I0731 18:43:37.369763  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 30/120
	I0731 18:43:38.371373  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 31/120
	I0731 18:43:39.373161  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 32/120
	I0731 18:43:40.374840  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 33/120
	I0731 18:43:41.376345  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 34/120
	I0731 18:43:42.378289  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 35/120
	I0731 18:43:43.379824  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 36/120
	I0731 18:43:44.381325  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 37/120
	I0731 18:43:45.382836  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 38/120
	I0731 18:43:46.384169  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 39/120
	I0731 18:43:47.386325  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 40/120
	I0731 18:43:48.387619  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 41/120
	I0731 18:43:49.389182  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 42/120
	I0731 18:43:50.390594  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 43/120
	I0731 18:43:51.391963  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 44/120
	I0731 18:43:52.393838  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 45/120
	I0731 18:43:53.395230  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 46/120
	I0731 18:43:54.396569  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 47/120
	I0731 18:43:55.398841  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 48/120
	I0731 18:43:56.400242  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 49/120
	I0731 18:43:57.402175  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 50/120
	I0731 18:43:58.403531  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 51/120
	I0731 18:43:59.405030  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 52/120
	I0731 18:44:00.406537  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 53/120
	I0731 18:44:01.408015  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 54/120
	I0731 18:44:02.409319  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 55/120
	I0731 18:44:03.410745  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 56/120
	I0731 18:44:04.412403  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 57/120
	I0731 18:44:05.414631  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 58/120
	I0731 18:44:06.416020  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 59/120
	I0731 18:44:07.417971  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 60/120
	I0731 18:44:08.419367  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 61/120
	I0731 18:44:09.420819  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 62/120
	I0731 18:44:10.422138  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 63/120
	I0731 18:44:11.423977  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 64/120
	I0731 18:44:12.425898  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 65/120
	I0731 18:44:13.427216  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 66/120
	I0731 18:44:14.428789  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 67/120
	I0731 18:44:15.430124  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 68/120
	I0731 18:44:16.431859  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 69/120
	I0731 18:44:17.433514  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 70/120
	I0731 18:44:18.434875  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 71/120
	I0731 18:44:19.436267  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 72/120
	I0731 18:44:20.438297  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 73/120
	I0731 18:44:21.439673  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 74/120
	I0731 18:44:22.441554  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 75/120
	I0731 18:44:23.442909  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 76/120
	I0731 18:44:24.444446  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 77/120
	I0731 18:44:25.445816  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 78/120
	I0731 18:44:26.447291  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 79/120
	I0731 18:44:27.449116  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 80/120
	I0731 18:44:28.450626  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 81/120
	I0731 18:44:29.452153  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 82/120
	I0731 18:44:30.453685  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 83/120
	I0731 18:44:31.455057  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 84/120
	I0731 18:44:32.456807  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 85/120
	I0731 18:44:33.458260  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 86/120
	I0731 18:44:34.459653  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 87/120
	I0731 18:44:35.461266  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 88/120
	I0731 18:44:36.462892  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 89/120
	I0731 18:44:37.465058  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 90/120
	I0731 18:44:38.466673  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 91/120
	I0731 18:44:39.468357  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 92/120
	I0731 18:44:40.469638  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 93/120
	I0731 18:44:41.471202  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 94/120
	I0731 18:44:42.472840  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 95/120
	I0731 18:44:43.474960  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 96/120
	I0731 18:44:44.476404  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 97/120
	I0731 18:44:45.477873  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 98/120
	I0731 18:44:46.479308  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 99/120
	I0731 18:44:47.481508  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 100/120
	I0731 18:44:48.483070  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 101/120
	I0731 18:44:49.484584  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 102/120
	I0731 18:44:50.486358  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 103/120
	I0731 18:44:51.487835  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 104/120
	I0731 18:44:52.489764  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 105/120
	I0731 18:44:53.491036  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 106/120
	I0731 18:44:54.492333  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 107/120
	I0731 18:44:55.493702  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 108/120
	I0731 18:44:56.495132  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 109/120
	I0731 18:44:57.496866  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 110/120
	I0731 18:44:58.499013  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 111/120
	I0731 18:44:59.500882  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 112/120
	I0731 18:45:00.502391  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 113/120
	I0731 18:45:01.503962  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 114/120
	I0731 18:45:02.505866  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 115/120
	I0731 18:45:03.507516  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 116/120
	I0731 18:45:04.509096  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 117/120
	I0731 18:45:05.510985  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 118/120
	I0731 18:45:06.512736  419780 main.go:141] libmachine: (ha-326651-m03) Waiting for machine to stop 119/120
	I0731 18:45:07.513428  419780 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:45:07.513511  419780 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 18:45:07.515907  419780 out.go:177] 
	W0731 18:45:07.517532  419780 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 18:45:07.517554  419780 out.go:239] * 
	* 
	W0731 18:45:07.520686  419780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:45:07.521996  419780 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-326651 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-326651 --wait=true -v=7 --alsologtostderr
E0731 18:45:32.744560  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:46:55.789658  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:48:48.017725  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-326651 --wait=true -v=7 --alsologtostderr: (4m5.000045969s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-326651
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326651 -n ha-326651
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-326651 logs -n 25: (1.927028199s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m04 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp testdata/cp-test.txt                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m04_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03:/home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m03 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-326651 node stop m02 -v=7                                                     | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-326651 node start m02 -v=7                                                    | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-326651 -v=7                                                           | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-326651 -v=7                                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-326651 --wait=true -v=7                                                    | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:45 UTC | 31 Jul 24 18:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-326651                                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:49 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:45:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:45:07.569875  420284 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:45:07.570124  420284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:45:07.570132  420284 out.go:304] Setting ErrFile to fd 2...
	I0731 18:45:07.570136  420284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:45:07.570296  420284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:45:07.570852  420284 out.go:298] Setting JSON to false
	I0731 18:45:07.571852  420284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8851,"bootTime":1722442657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:45:07.571920  420284 start.go:139] virtualization: kvm guest
	I0731 18:45:07.574265  420284 out.go:177] * [ha-326651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:45:07.576024  420284 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:45:07.576045  420284 notify.go:220] Checking for updates...
	I0731 18:45:07.578992  420284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:45:07.580501  420284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:45:07.582001  420284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:45:07.583240  420284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:45:07.584577  420284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:45:07.586220  420284 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:45:07.586323  420284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:45:07.586738  420284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:45:07.586792  420284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:45:07.603523  420284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0731 18:45:07.604065  420284 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:45:07.604756  420284 main.go:141] libmachine: Using API Version  1
	I0731 18:45:07.604781  420284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:45:07.605211  420284 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:45:07.605415  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.641759  420284 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:45:07.643347  420284 start.go:297] selected driver: kvm2
	I0731 18:45:07.643373  420284 start.go:901] validating driver "kvm2" against &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:45:07.643585  420284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:45:07.644062  420284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:45:07.644151  420284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:45:07.660222  420284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:45:07.660950  420284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:45:07.660997  420284 cni.go:84] Creating CNI manager for ""
	I0731 18:45:07.661006  420284 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 18:45:07.661083  420284 start.go:340] cluster config:
	{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:45:07.661253  420284 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:45:07.664071  420284 out.go:177] * Starting "ha-326651" primary control-plane node in "ha-326651" cluster
	I0731 18:45:07.665438  420284 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:45:07.665502  420284 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:45:07.665515  420284 cache.go:56] Caching tarball of preloaded images
	I0731 18:45:07.665615  420284 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:45:07.665629  420284 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:45:07.665793  420284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:45:07.666075  420284 start.go:360] acquireMachinesLock for ha-326651: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:45:07.666131  420284 start.go:364] duration metric: took 33.292µs to acquireMachinesLock for "ha-326651"
	I0731 18:45:07.666150  420284 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:45:07.666160  420284 fix.go:54] fixHost starting: 
	I0731 18:45:07.666462  420284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:45:07.666497  420284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:45:07.681525  420284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0731 18:45:07.681952  420284 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:45:07.682419  420284 main.go:141] libmachine: Using API Version  1
	I0731 18:45:07.682438  420284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:45:07.682771  420284 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:45:07.682984  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.683174  420284 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:45:07.684757  420284 fix.go:112] recreateIfNeeded on ha-326651: state=Running err=<nil>
	W0731 18:45:07.684786  420284 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:45:07.687627  420284 out.go:177] * Updating the running kvm2 "ha-326651" VM ...
	I0731 18:45:07.689019  420284 machine.go:94] provisionDockerMachine start ...
	I0731 18:45:07.689046  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.689245  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.692042  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.692623  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.692659  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.692807  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.693054  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.693217  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.693343  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.693551  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.693758  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.693770  420284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:45:07.806460  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:45:07.806492  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:07.806744  420284 buildroot.go:166] provisioning hostname "ha-326651"
	I0731 18:45:07.806771  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:07.806993  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.810013  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.810406  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.810430  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.810633  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.810819  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.810980  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.811099  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.811266  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.811445  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.811456  420284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651 && echo "ha-326651" | sudo tee /etc/hostname
	I0731 18:45:07.939830  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:45:07.939868  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.943013  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.943378  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.943402  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.943614  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.943829  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.944013  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.944197  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.944427  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.944662  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.944694  420284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:45:08.057766  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
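The SSH block above is the idempotent /etc/hosts update minikube performs while provisioning the hostname: reuse an existing 127.0.1.1 entry if one is present, otherwise append one. A minimal Go sketch of the same idea (the path, hostname, and helper name here are illustrative assumptions, not minikube code):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname makes sure the hosts file maps 127.0.1.1 to hostname:
// it rewrites an existing 127.0.1.1 line or appends a new one.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")

	hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)

	for _, l := range lines {
		if hasName.MatchString(l) {
			return nil // hostname already mapped, nothing to do
		}
	}
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "ha-326651"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}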
	I0731 18:45:08.057798  420284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:45:08.057849  420284 buildroot.go:174] setting up certificates
	I0731 18:45:08.057864  420284 provision.go:84] configureAuth start
	I0731 18:45:08.057881  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:08.058170  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:45:08.061104  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.061504  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.061534  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.061718  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.064180  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.064592  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.064618  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.064764  420284 provision.go:143] copyHostCerts
	I0731 18:45:08.064809  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:45:08.064867  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:45:08.064877  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:45:08.064962  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:45:08.065102  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:45:08.065140  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:45:08.065145  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:45:08.065185  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:45:08.065275  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:45:08.065300  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:45:08.065309  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:45:08.065346  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:45:08.065438  420284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651 san=[127.0.0.1 192.168.39.220 ha-326651 localhost minikube]
	I0731 18:45:08.369389  420284 provision.go:177] copyRemoteCerts
	I0731 18:45:08.369459  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:45:08.369486  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.372569  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.372948  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.372984  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.373155  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:08.373479  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.373656  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:08.373815  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:45:08.459621  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:45:08.459710  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:45:08.489737  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:45:08.489806  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 18:45:08.516509  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:45:08.516599  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:45:08.541445  420284 provision.go:87] duration metric: took 483.565088ms to configureAuth
	I0731 18:45:08.541484  420284 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:45:08.541704  420284 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:45:08.541776  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.544396  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.544803  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.544835  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.544992  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:08.545203  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.545342  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.545514  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:08.545756  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:08.545992  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:08.546017  420284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:46:39.526631  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:46:39.526663  420284 machine.go:97] duration metric: took 1m31.837624181s to provisionDockerMachine
	I0731 18:46:39.526678  420284 start.go:293] postStartSetup for "ha-326651" (driver="kvm2")
	I0731 18:46:39.526690  420284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:46:39.526710  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.527088  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:46:39.527122  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.530742  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.531182  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.531210  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.531397  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.531608  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.531887  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.532046  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.617742  420284 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:46:39.622777  420284 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:46:39.622814  420284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:46:39.622903  420284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:46:39.623058  420284 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:46:39.623079  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:46:39.623225  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:46:39.633805  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:46:39.660049  420284 start.go:296] duration metric: took 133.352181ms for postStartSetup
	I0731 18:46:39.660123  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.660484  420284 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 18:46:39.660516  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.663590  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.664174  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.664204  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.664369  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.664598  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.664789  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.664958  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	W0731 18:46:39.748458  420284 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 18:46:39.748488  420284 fix.go:56] duration metric: took 1m32.082329042s for fixHost
	I0731 18:46:39.748517  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.751306  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.751738  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.751768  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.751923  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.752202  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.752413  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.752550  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.752728  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:46:39.752933  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:46:39.752946  420284 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 18:46:39.862986  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722451599.826671641
	
	I0731 18:46:39.863011  420284 fix.go:216] guest clock: 1722451599.826671641
	I0731 18:46:39.863020  420284 fix.go:229] Guest: 2024-07-31 18:46:39.826671641 +0000 UTC Remote: 2024-07-31 18:46:39.74849775 +0000 UTC m=+92.215740015 (delta=78.173891ms)
	I0731 18:46:39.863046  420284 fix.go:200] guest clock delta is within tolerance: 78.173891ms
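The fix.go lines above compare the guest clock read over SSH with the host-side reference time and only proceed when the absolute delta (here 78.173891ms) is within tolerance. A minimal Go sketch of that comparison, reusing the timestamps from the log above (the 2s tolerance and the function name are assumptions for illustration):

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute difference between the guest clock
// (read over SSH) and the host-side reference time, and whether that drift
// is within the allowed tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps echo the log above: guest clock 1722451599.826671641,
	// host reference 2024-07-31 18:46:39.74849775 +0000 UTC.
	guest := time.Unix(1722451599, 826671641).UTC()
	host := time.Date(2024, 7, 31, 18, 46, 39, 748497750, time.UTC)

	delta, ok := withinTolerance(guest, host, 2*time.Second) // 2s is an assumed tolerance
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}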
	I0731 18:46:39.863054  420284 start.go:83] releasing machines lock for "ha-326651", held for 1m32.19691177s
	I0731 18:46:39.863081  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.863372  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:46:39.865850  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.866243  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.866274  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.866392  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.866899  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.867075  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.867161  420284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:46:39.867202  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.867303  420284 ssh_runner.go:195] Run: cat /version.json
	I0731 18:46:39.867320  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.870223  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870360  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870619  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.870647  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870834  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.870842  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.870866  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.871042  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.871068  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.871227  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.871246  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.871459  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.871450  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.871595  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.950297  420284 ssh_runner.go:195] Run: systemctl --version
	I0731 18:46:39.974393  420284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:46:40.140781  420284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:46:40.147030  420284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:46:40.147153  420284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:46:40.158842  420284 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 18:46:40.158869  420284 start.go:495] detecting cgroup driver to use...
	I0731 18:46:40.158936  420284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:46:40.181327  420284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:46:40.198423  420284 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:46:40.198506  420284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:46:40.214757  420284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:46:40.231415  420284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:46:40.417287  420284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:46:40.580540  420284 docker.go:233] disabling docker service ...
	I0731 18:46:40.580623  420284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:46:40.597414  420284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:46:40.611142  420284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:46:40.760442  420284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:46:40.910810  420284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:46:40.924560  420284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:46:40.944429  420284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:46:40.944506  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.955406  420284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:46:40.955489  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.965730  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.976562  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.986790  420284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:46:40.997311  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.007580  420284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.019068  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.029644  420284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:46:41.039109  420284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:46:41.048174  420284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:46:41.200673  420284 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 18:46:41.493434  420284 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:46:41.493544  420284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:46:41.498615  420284 start.go:563] Will wait 60s for crictl version
	I0731 18:46:41.498681  420284 ssh_runner.go:195] Run: which crictl
	I0731 18:46:41.502858  420284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:46:41.545456  420284 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:46:41.545554  420284 ssh_runner.go:195] Run: crio --version
	I0731 18:46:41.576407  420284 ssh_runner.go:195] Run: crio --version
	I0731 18:46:41.606904  420284 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:46:41.608103  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:46:41.610560  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:41.610919  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:41.610941  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:41.611115  420284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:46:41.616019  420284 kubeadm.go:883] updating cluster {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:46:41.616195  420284 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:46:41.616243  420284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:46:41.659725  420284 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:46:41.659752  420284 crio.go:433] Images already preloaded, skipping extraction
	I0731 18:46:41.659806  420284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:46:41.693988  420284 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:46:41.694020  420284 cache_images.go:84] Images are preloaded, skipping loading
	I0731 18:46:41.694034  420284 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0731 18:46:41.694186  420284 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:46:41.694270  420284 ssh_runner.go:195] Run: crio config
	I0731 18:46:41.742178  420284 cni.go:84] Creating CNI manager for ""
	I0731 18:46:41.742204  420284 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 18:46:41.742217  420284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:46:41.742253  420284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326651 NodeName:ha-326651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:46:41.742410  420284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-326651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:46:41.742434  420284 kube-vip.go:115] generating kube-vip config ...
	I0731 18:46:41.742478  420284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:46:41.754646  420284 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:46:41.754780  420284 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 18:46:41.754863  420284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:46:41.764820  420284 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:46:41.764931  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 18:46:41.774884  420284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 18:46:41.792614  420284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:46:41.809676  420284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 18:46:41.826698  420284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 18:46:41.843934  420284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:46:41.848768  420284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:46:41.998053  420284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:46:42.013081  420284 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.220
	I0731 18:46:42.013111  420284 certs.go:194] generating shared ca certs ...
	I0731 18:46:42.013133  420284 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.013313  420284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:46:42.013354  420284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:46:42.013373  420284 certs.go:256] generating profile certs ...
	I0731 18:46:42.013459  420284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:46:42.013489  420284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea
	I0731 18:46:42.013504  420284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.50 192.168.39.254]
	I0731 18:46:42.278192  420284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea ...
	I0731 18:46:42.278230  420284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea: {Name:mk8a03ec5a011b43a140ef68f41313daf1725207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.278408  420284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea ...
	I0731 18:46:42.278420  420284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea: {Name:mke80c81b9fe18fc276220acecd36fb9cd9a551d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.278501  420284 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:46:42.278673  420284 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:46:42.278931  420284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:46:42.278973  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:46:42.279011  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:46:42.279022  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:46:42.279033  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:46:42.279042  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:46:42.279067  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:46:42.279088  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:46:42.279101  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:46:42.279178  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:46:42.279216  420284 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:46:42.279226  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:46:42.279251  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:46:42.279557  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:46:42.279653  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:46:42.279721  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:46:42.279759  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.279784  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.279798  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.281193  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:46:42.307184  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:46:42.331020  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:46:42.355958  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:46:42.380013  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:46:42.403692  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:46:42.427924  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:46:42.452888  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:46:42.477510  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:46:42.500911  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:46:42.524138  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:46:42.548231  420284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:46:42.564934  420284 ssh_runner.go:195] Run: openssl version
	I0731 18:46:42.571140  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:46:42.582942  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.588005  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.588064  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.593880  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 18:46:42.603818  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:46:42.614851  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.619304  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.619360  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.624978  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:46:42.634789  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:46:42.646401  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.651377  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.651429  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.657107  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 18:46:42.701068  420284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:46:42.757161  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:46:42.804167  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:46:42.820725  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:46:42.842033  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:46:42.893202  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:46:43.019732  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 18:46:43.129593  420284 kubeadm.go:392] StartCluster: {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:46:43.129762  420284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:46:43.129824  420284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:46:43.326870  420284 cri.go:89] found id: "dda547983c3f5ddc08ff732c5926c9248e3084f3d726fca7f7ab4dc9501a9607"
	I0731 18:46:43.326902  420284 cri.go:89] found id: "d9d475fbcc274fb6c41cdc47837ca1e6a4aa1bcbd7ff3232a10ce55abffa5abb"
	I0731 18:46:43.326908  420284 cri.go:89] found id: "6ca2ae238d32b3d2db46923dad6103c3f22adb55d82c29497b299bc93d5d44ea"
	I0731 18:46:43.326912  420284 cri.go:89] found id: "68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33"
	I0731 18:46:43.326917  420284 cri.go:89] found id: "36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7"
	I0731 18:46:43.326921  420284 cri.go:89] found id: "81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821"
	I0731 18:46:43.326926  420284 cri.go:89] found id: "5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd"
	I0731 18:46:43.326930  420284 cri.go:89] found id: "c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd"
	I0731 18:46:43.326936  420284 cri.go:89] found id: "44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c"
	I0731 18:46:43.326947  420284 cri.go:89] found id: "bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a"
	I0731 18:46:43.326951  420284 cri.go:89] found id: ""
	I0731 18:46:43.327014  420284 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.292501260Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451753292469503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87cd6abf-dce1-4fc2-83a6-540a2ca1ff70 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.307600232Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98b7b4c9-99d0-4d58-aeea-06a35a03e4d3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.307720105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98b7b4c9-99d0-4d58-aeea-06a35a03e4d3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.308284853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98b7b4c9-99d0-4d58-aeea-06a35a03e4d3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.362847171Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8f9a1ba-4a06-4f61-8cc9-7af9593912c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.362937489Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8f9a1ba-4a06-4f61-8cc9-7af9593912c3 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.364044812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae9a96ac-5712-4562-ae8b-02e4cd3e187f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.364600776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451753364578184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae9a96ac-5712-4562-ae8b-02e4cd3e187f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.365406729Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25fb55a5-bd21-4be3-a29b-f46779d78b3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.365478903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25fb55a5-bd21-4be3-a29b-f46779d78b3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.366008386Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25fb55a5-bd21-4be3-a29b-f46779d78b3c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.419223663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1efae5b-fe0f-4da2-93a1-20081ccd98d7 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.419331279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1efae5b-fe0f-4da2-93a1-20081ccd98d7 name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.420804433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae02de37-dcf7-4360-8557-1f73ba9e5821 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.421452886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451753421426456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae02de37-dcf7-4360-8557-1f73ba9e5821 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.421967594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33b1deb1-a7aa-4cb1-93cf-402b2725d896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.422040398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33b1deb1-a7aa-4cb1-93cf-402b2725d896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.422498338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33b1deb1-a7aa-4cb1-93cf-402b2725d896 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.470586600Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a19c9817-454f-4996-8953-13c5f2c2a6cb name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.470683516Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a19c9817-454f-4996-8953-13c5f2c2a6cb name=/runtime.v1.RuntimeService/Version
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.472058626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2314a488-f827-467f-858a-f3cc3f90e319 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.472673129Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451753472646473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2314a488-f827-467f-858a-f3cc3f90e319 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.473339940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=256c2298-c4a6-495b-9102-c9b8a0c36ce2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.473421057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=256c2298-c4a6-495b-9102-c9b8a0c36ce2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:49:13 ha-326651 crio[3766]: time="2024-07-31 18:49:13.473834018Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=256c2298-c4a6-495b-9102-c9b8a0c36ce2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f1fbb83e91bf2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   40023062c6e42       storage-provisioner
	02116477a1866       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   5218a1916fc18       kube-controller-manager-ha-326651
	74114b61ea048       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   0c86ce87b523b       kube-apiserver-ha-326651
	f2cbf6604849f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   40023062c6e42       storage-provisioner
	e23352feabf79       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   874e1611bd497       busybox-fc5497c4f-mknlp
	d7f9d0f3ec264       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   98c73fb71d53a       kube-vip-ha-326651
	d9d3afedcdd25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e083fb2256d90       coredns-7db6d8ff4d-hsr7k
	35b19bba2ba5e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   2307a0c144e9e       kindnet-n7q8p
	68867a78c6b36       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   414d0bfd7c538       kube-proxy-hg6sj
	d626a6c1307f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   e9ad7728363b0       coredns-7db6d8ff4d-p2tfn
	c3968b33a3882       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   5218a1916fc18       kube-controller-manager-ha-326651
	64795d240e81b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   0c86ce87b523b       kube-apiserver-ha-326651
	909924541d869       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   e86963e2fc3dc       etcd-ha-326651
	b98405b29355d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   b32cd5b71cb09       kube-scheduler-ha-326651
	f413f75c91415       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   25be6f24676d4       busybox-fc5497c4f-mknlp
	68c50c65ea238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   d651e4190c72a       coredns-7db6d8ff4d-hsr7k
	36f0c9b04bb2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   8a4d6fb11ec09       coredns-7db6d8ff4d-p2tfn
	81362a0e08184       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   8783b79032fde       kindnet-n7q8p
	5abc9372bd5fd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   4ed8613feb5ec       kube-proxy-hg6sj
	c40e9679adc35       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   4bc17ce1c9d2f       kube-scheduler-ha-326651
	bd3d8dbedb96a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   1e765f5d9b3b0       etcd-ha-326651
	
	
	==> coredns [36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7] <==
	[INFO] 10.244.0.4:43466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105451s
	[INFO] 10.244.0.4:43878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152423s
	[INFO] 10.244.0.4:49227 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079008s
	[INFO] 10.244.0.4:47339 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074836s
	[INFO] 10.244.0.4:60002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056953s
	[INFO] 10.244.1.2:60772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013788s
	[INFO] 10.244.1.2:34997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091978s
	[INFO] 10.244.2.2:48501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137292s
	[INFO] 10.244.2.2:41701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113322s
	[INFO] 10.244.2.2:46841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192541s
	[INFO] 10.244.2.2:37979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066316s
	[INFO] 10.244.0.4:41261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093714s
	[INFO] 10.244.0.4:56128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073138s
	[INFO] 10.244.1.2:60703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131127s
	[INFO] 10.244.1.2:47436 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239598s
	[INFO] 10.244.1.2:57459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181068s
	[INFO] 10.244.2.2:56898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174969s
	[INFO] 10.244.2.2:33868 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108451s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1968&timeout=9m34s&timeoutSeconds=574&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33] <==
	[INFO] 10.244.2.2:52172 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280659s
	[INFO] 10.244.2.2:43370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363635s
	[INFO] 10.244.2.2:52527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117452s
	[INFO] 10.244.2.2:48596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117278s
	[INFO] 10.244.0.4:55816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992063s
	[INFO] 10.244.0.4:33045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291238s
	[INFO] 10.244.0.4:37880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043099s
	[INFO] 10.244.1.2:40143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128845s
	[INFO] 10.244.1.2:48970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131569s
	[INFO] 10.244.0.4:57102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075402s
	[INFO] 10.244.0.4:54508 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004372s
	[INFO] 10.244.1.2:37053 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000194922s
	[INFO] 10.244.2.2:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129881s
	[INFO] 10.244.2.2:48437 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148815s
	[INFO] 10.244.0.4:50060 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094079s
	[INFO] 10.244.0.4:42736 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105289s
	[INFO] 10.244.0.4:43280 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052254s
	[INFO] 10.244.0.4:47658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074002s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1968&timeout=9m45s&timeoutSeconds=585&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1442526784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:52.056) (total time: 10001ms):
	Trace[1442526784]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:47:02.058)
	Trace[1442526784]: [10.001645878s] [10.001645878s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1971090037]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:54.856) (total time: 13784ms):
	Trace[1971090037]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer 13784ms (18:47:08.640)
	Trace[1971090037]: [13.784174422s] [13.784174422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5] <==
	[INFO] plugin/kubernetes: Trace[207031835]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:58.051) (total time: 10589ms):
	Trace[207031835]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47364->10.96.0.1:443: read: connection reset by peer 10589ms (18:47:08.641)
	Trace[207031835]: [10.589431725s] [10.589431725s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47364->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[925302837]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:55.225) (total time: 13415ms):
	Trace[925302837]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer 13415ms (18:47:08.641)
	Trace[925302837]: [13.415917102s] [13.415917102s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-326651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:49:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-326651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 419482855e6c4b5d814fd4a3e9e4847f
	  System UUID:                41948285-5e6c-4b5d-814f-d4a3e9e4847f
	  Boot ID:                    87f7122f-f0c1-4fc2-964d-0fcb352e2937
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mknlp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-hsr7k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-p2tfn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-326651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-n7q8p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-326651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-326651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hg6sj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-326651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-326651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 109s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-326651 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-326651 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-326651 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-326651 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Warning  ContainerGCFailed        2m53s (x2 over 3m53s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           91s                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           90s                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	
	
	Name:               ha-326651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:49:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-326651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e6699cde3924aaf94b25ab366c2acb8
	  System UUID:                2e6699cd-e392-4aaf-94b2-5ab366c2acb8
	  Boot ID:                    f51ac4b7-b2a2-46f0-bb97-f1e2b5e5d270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cs6t8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-326651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7l9l7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-326651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-326651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-stqb2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-326651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-326651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  NodeNotReady             9m                   node-controller  Node ha-326651-m02 status is now: NodeNotReady
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           91s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           34s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	
	
	Name:               ha-326651-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_37_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:37:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:49:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:48:44 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:48:44 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:48:44 +0000   Wed, 31 Jul 2024 18:37:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:48:44 +0000   Wed, 31 Jul 2024 18:38:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.50
	  Hostname:    ha-326651-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b5e4f78408f84c3ebbac53526a1e33d5
	  System UUID:                b5e4f784-08f8-4c3e-bbac-53526a1e33d5
	  Boot ID:                    54570bd3-dc01-4805-bd5a-c7077d43347f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lgg6t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-326651-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-86n7r                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-326651-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-326651-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-lhprb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-326651-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-326651-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-326651-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-326651-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal   RegisteredNode           92s                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node ha-326651-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node ha-326651-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node ha-326651-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 61s                kubelet          Node ha-326651-m03 has been rebooted, boot id: 54570bd3-dc01-4805-bd5a-c7077d43347f
	  Normal   RegisteredNode           34s                node-controller  Node ha-326651-m03 event: Registered Node ha-326651-m03 in Controller
	
	
	Name:               ha-326651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:38:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:49:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:49:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:49:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:49:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-326651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbaa436975294cf08fb310ae9ef7d64d
	  System UUID:                cbaa4369-7529-4cf0-8fb3-10ae9ef7d64d
	  Boot ID:                    0727a61c-d910-40e1-b47d-ff631c7b025c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-nmwh7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-2nq9j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-326651-m04 status is now: NodeReady
	  Normal   RegisteredNode           92s                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   NodeNotReady             52s                node-controller  Node ha-326651-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-326651-m04 has been rebooted, boot id: 0727a61c-d910-40e1-b47d-ff631c7b025c
	  Normal   NodeReady                9s                 kubelet          Node ha-326651-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:35] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.063136] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063799] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.163467] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151948] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.299453] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.312604] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.062376] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.195979] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.049374] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.105366] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.338531] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.117589] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 18:36] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 18:46] systemd-fstab-generator[3685]: Ignoring "noauto" option for root device
	[  +0.180536] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
	[  +0.181034] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.150445] systemd-fstab-generator[3723]: Ignoring "noauto" option for root device
	[  +0.281151] systemd-fstab-generator[3751]: Ignoring "noauto" option for root device
	[  +0.802477] systemd-fstab-generator[3852]: Ignoring "noauto" option for root device
	[ +13.298730] kauditd_printk_skb: 217 callbacks suppressed
	[Jul31 18:47] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.161472] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b] <==
	{"level":"warn","ts":"2024-07-31T18:48:10.17249Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.50:2380/version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:10.172576Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:14.174585Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.50:2380/version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:14.174637Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:14.404462Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8b1e75ba5ca8c5e","rtt":"0s","error":"dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:14.404647Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8b1e75ba5ca8c5e","rtt":"0s","error":"dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:18.176755Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.50:2380/version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:18.176807Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:19.405466Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b8b1e75ba5ca8c5e","rtt":"0s","error":"dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:19.405683Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b8b1e75ba5ca8c5e","rtt":"0s","error":"dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:22.17911Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.50:2380/version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T18:48:22.179267Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"b8b1e75ba5ca8c5e","error":"Get \"https://192.168.39.50:2380/version\": dial tcp 192.168.39.50:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-31T18:48:23.834015Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:23.855404Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9bf1b68912964415","to":"b8b1e75ba5ca8c5e","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T18:48:23.855471Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:23.856988Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"9bf1b68912964415","to":"b8b1e75ba5ca8c5e","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T18:48:23.857039Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:23.857283Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:23.857887Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:24.490243Z","caller":"traceutil/trace.go:171","msg":"trace[495671616] linearizableReadLoop","detail":"{readStateIndex:2809; appliedIndex:2809; }","duration":"140.808708ms","start":"2024-07-31T18:48:24.349393Z","end":"2024-07-31T18:48:24.490202Z","steps":["trace[495671616] 'read index received'  (duration: 140.735902ms)","trace[495671616] 'applied index is now lower than readState.Index'  (duration: 70.983µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:48:24.490445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.996454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:48:24.49058Z","caller":"traceutil/trace.go:171","msg":"trace[1656594907] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2400; }","duration":"141.248448ms","start":"2024-07-31T18:48:24.349314Z","end":"2024-07-31T18:48:24.490562Z","steps":["trace[1656594907] 'agreement among raft nodes before linearized reading'  (duration: 141.016508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:48:24.490873Z","caller":"traceutil/trace.go:171","msg":"trace[1306060640] transaction","detail":"{read_only:false; response_revision:2401; number_of_response:1; }","duration":"167.55476ms","start":"2024-07-31T18:48:24.323303Z","end":"2024-07-31T18:48:24.490858Z","steps":["trace[1306060640] 'process raft request'  (duration: 167.373182ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:48:32.152573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.68221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-326651-m03\" ","response":"range_response_count:1 size:6884"}
	{"level":"info","ts":"2024-07-31T18:48:32.152655Z","caller":"traceutil/trace.go:171","msg":"trace[169461020] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-326651-m03; range_end:; response_count:1; response_revision:2443; }","duration":"131.76851ms","start":"2024-07-31T18:48:32.020866Z","end":"2024-07-31T18:48:32.152635Z","steps":["trace[169461020] 'range keys from in-memory index tree'  (duration: 130.567299ms)"],"step_count":1}
	
	
	==> etcd [bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a] <==
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T18:45:08.688894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"629.678482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-31T18:45:08.697545Z","caller":"traceutil/trace.go:171","msg":"trace[1853778956] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"638.32401ms","start":"2024-07-31T18:45:08.059211Z","end":"2024-07-31T18:45:08.697535Z","steps":["trace[1853778956] 'agreement among raft nodes before linearized reading'  (duration: 629.67842ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:45:08.697614Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:45:08.059193Z","time spent":"638.408518ms","remote":"127.0.0.1:58462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-31T18:45:08.75305Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9bf1b68912964415","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T18:45:08.75333Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753369Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753561Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753594Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753623Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.75363Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753645Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753742Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753786Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753853Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753866Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.756511Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.220:2380"}
	{"level":"info","ts":"2024-07-31T18:45:08.756672Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.220:2380"}
	{"level":"info","ts":"2024-07-31T18:45:08.756714Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-326651","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.220:2380"],"advertise-client-urls":["https://192.168.39.220:2379"]}
	
	
	==> kernel <==
	 18:49:14 up 14 min,  0 users,  load average: 0.44, 0.43, 0.29
	Linux ha-326651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60] <==
	I0731 18:48:34.865859       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:48:44.864121       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:48:44.864283       1 main.go:299] handling current node
	I0731 18:48:44.864313       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:48:44.864331       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:48:44.864471       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:48:44.864494       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:48:44.864569       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:48:44.864596       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:48:54.867670       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:48:54.867718       1 main.go:299] handling current node
	I0731 18:48:54.867735       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:48:54.867740       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:48:54.867937       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:48:54.867963       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:48:54.868028       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:48:54.868049       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:49:04.863899       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:49:04.864043       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:49:04.864349       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:49:04.864421       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:49:04.864533       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:49:04.864567       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:49:04.864704       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:49:04.864749       1 main.go:299] handling current node
	
	
	==> kindnet [81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821] <==
	I0731 18:44:39.756611       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:39.756631       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:39.756802       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:39.756835       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:39.756902       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:39.756919       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:44:49.760225       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:44:49.760398       1 main.go:299] handling current node
	I0731 18:44:49.760465       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:49.760521       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:49.760707       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:49.760759       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:49.760872       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:49.760915       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	E0731 18:44:57.441453       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1913&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0731 18:44:59.757045       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:44:59.757209       1 main.go:299] handling current node
	I0731 18:44:59.757245       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:59.757265       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:59.757429       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:59.757451       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:59.757543       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:59.757564       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	W0731 18:45:07.057554       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0731 18:45:07.057628       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a] <==
	I0731 18:46:44.183708       1 options.go:221] external host was not specified, using 192.168.39.220
	I0731 18:46:44.189288       1 server.go:148] Version: v1.30.3
	I0731 18:46:44.189473       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:46:44.601698       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 18:46:44.620794       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 18:46:44.629213       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 18:46:44.629283       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 18:46:44.629478       1 instance.go:299] Using reconciler: lease
	W0731 18:47:04.601351       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 18:47:04.601490       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0731 18:47:04.630676       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0731 18:47:04.630741       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff] <==
	I0731 18:47:29.975618       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0731 18:47:29.975628       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0731 18:47:30.047782       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 18:47:30.052775       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 18:47:30.056076       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 18:47:30.056401       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 18:47:30.056316       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 18:47:30.056356       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 18:47:30.056375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 18:47:30.067648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 18:47:30.070306       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 18:47:30.070389       1 policy_source.go:224] refreshing policies
	I0731 18:47:30.075645       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 18:47:30.075688       1 aggregator.go:165] initial CRD sync complete...
	I0731 18:47:30.075701       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 18:47:30.075707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 18:47:30.075712       1 cache.go:39] Caches are synced for autoregister controller
	W0731 18:47:30.077919       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.50]
	I0731 18:47:30.081631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 18:47:30.095574       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 18:47:30.102552       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 18:47:30.153468       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 18:47:30.962903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 18:47:31.321227       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.50]
	W0731 18:47:51.325371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.220]
	
	
	==> kube-controller-manager [02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0] <==
	I0731 18:47:42.524821       1 shared_informer.go:320] Caches are synced for namespace
	I0731 18:47:42.526309       1 shared_informer.go:320] Caches are synced for GC
	I0731 18:47:42.536611       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 18:47:42.561710       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 18:47:42.640515       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 18:47:42.685232       1 shared_informer.go:320] Caches are synced for disruption
	I0731 18:47:42.696371       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 18:47:42.734525       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 18:47:42.840467       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651"
	I0731 18:47:42.840582       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651-m02"
	I0731 18:47:42.841300       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651-m03"
	I0731 18:47:42.841457       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-326651-m04"
	I0731 18:47:42.844073       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 18:47:43.128485       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 18:47:43.177557       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 18:47:43.177660       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 18:47:46.091400       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.46737ms"
	I0731 18:47:46.091926       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.542µs"
	I0731 18:47:50.412969       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.868478ms"
	I0731 18:47:50.415739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="446.483µs"
	I0731 18:48:14.527327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.207714ms"
	I0731 18:48:14.527447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.853µs"
	I0731 18:48:31.935452       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.747048ms"
	I0731 18:48:31.935621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.351µs"
	I0731 18:49:05.158727       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-326651-m04"
	
	
	==> kube-controller-manager [c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85] <==
	I0731 18:46:44.989845       1 serving.go:380] Generated self-signed cert in-memory
	I0731 18:46:45.387605       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 18:46:45.388528       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:46:45.390314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 18:46:45.391063       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 18:46:45.391296       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 18:46:45.391400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0731 18:47:05.637211       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.220:8443/healthz\": dial tcp 192.168.39.220:8443: connect: connection refused"
	
	
	==> kube-proxy [5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd] <==
	E0731 18:44:05.155212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.368930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.369483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.369789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:23.584585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:23.585009       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:23.585172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:23.585206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:26.657214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:26.657326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:38.945380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:38.945604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:42.017441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:42.017871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:45.090227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:45.090587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d] <==
	I0731 18:47:24.876688       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 18:47:24.877624       1 server.go:872] "Version info" version="v1.30.3"
	I0731 18:47:24.877661       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:47:24.879401       1 config.go:192] "Starting service config controller"
	I0731 18:47:24.879444       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:47:24.879474       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:47:24.879494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:47:24.880191       1 config.go:319] "Starting node config controller"
	I0731 18:47:24.880217       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0731 18:47:27.905444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.905638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906200       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0731 18:47:27.906178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:27.906273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.979116       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.979325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.977883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 18:47:32.779851       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:47:32.881427       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:47:33.080063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094] <==
	W0731 18:47:22.519816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.220:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:22.519917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.220:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:22.708918       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.220:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:22.709030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.220:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:22.818245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.220:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:22.818403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.220:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:23.361683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.220:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:23.361812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.220:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:23.737699       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.220:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:23.737774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.220:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:24.304483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.220:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:24.304554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.220:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:29.986040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.986095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:47:29.986250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:47:29.986285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:47:29.986573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:47:29.986620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:47:29.986715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.986749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:47:29.986819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:47:29.989264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:47:29.991985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.992078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 18:47:44.046326       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd] <==
	W0731 18:45:05.352085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:45:05.352191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 18:45:05.936509       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:45:05.936661       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:45:06.308794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:45:06.308901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:45:06.514414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:45:06.514509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:45:06.652032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:06.652230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:45:06.663388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:45:06.663496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:45:06.744580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:06.744633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:45:06.913051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:45:06.913108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 18:45:07.121232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:45:07.121344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:45:07.139615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:45:07.139647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 18:45:07.284416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:45:07.284469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:45:07.868544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:07.868588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:08.662471       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 18:47:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:47:21 ha-326651 kubelet[1381]: W0731 18:47:21.760696    1381 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 18:47:21 ha-326651 kubelet[1381]: E0731 18:47:21.760831    1381 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=1903": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 18:47:21 ha-326651 kubelet[1381]: I0731 18:47:21.760921    1381 status_manager.go:853] "Failed to get status for pod" podUID="40cf0ce9-4b32-45fb-adef-577d742e433a" pod="kube-system/kube-proxy-hg6sj" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg6sj\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 18:47:21 ha-326651 kubelet[1381]: E0731 18:47:21.760675    1381 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-326651.17e7606aa13138ba  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-326651,UID:a1314e538bc1ac5bc50f9e801bfd0998,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-326651,},FirstTimestamp:2024-07-31 18:43:13.579661498 +0000 UTC m=+473.438924834,LastTimestamp:2024-07-31 18:43:13.579661498 +0000 UTC m=+473.438924834,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-326651,}"
	Jul 31 18:47:22 ha-326651 kubelet[1381]: I0731 18:47:22.033670    1381 scope.go:117] "RemoveContainer" containerID="f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5"
	Jul 31 18:47:22 ha-326651 kubelet[1381]: E0731 18:47:22.033865    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(83869540-accb-4a58-b094-6bdc6b4c1944)\"" pod="kube-system/storage-provisioner" podUID="83869540-accb-4a58-b094-6bdc6b4c1944"
	Jul 31 18:47:24 ha-326651 kubelet[1381]: I0731 18:47:24.314595    1381 scope.go:117] "RemoveContainer" containerID="64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a"
	Jul 31 18:47:24 ha-326651 kubelet[1381]: I0731 18:47:24.832654    1381 status_manager.go:853] "Failed to get status for pod" podUID="0c72faff8a00fedc572d381491b77ea1" pod="kube-system/etcd-ha-326651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-326651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 18:47:24 ha-326651 kubelet[1381]: E0731 18:47:24.832695    1381 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-326651?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 31 18:47:25 ha-326651 kubelet[1381]: I0731 18:47:25.313974    1381 scope.go:117] "RemoveContainer" containerID="c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85"
	Jul 31 18:47:27 ha-326651 kubelet[1381]: I0731 18:47:27.904712    1381 status_manager.go:853] "Failed to get status for pod" podUID="a1314e538bc1ac5bc50f9e801bfd0998" pod="kube-system/kube-apiserver-ha-326651" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-326651\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 31 18:47:33 ha-326651 kubelet[1381]: I0731 18:47:33.314348    1381 scope.go:117] "RemoveContainer" containerID="f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5"
	Jul 31 18:47:33 ha-326651 kubelet[1381]: E0731 18:47:33.314591    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(83869540-accb-4a58-b094-6bdc6b4c1944)\"" pod="kube-system/storage-provisioner" podUID="83869540-accb-4a58-b094-6bdc6b4c1944"
	Jul 31 18:47:46 ha-326651 kubelet[1381]: I0731 18:47:46.314077    1381 scope.go:117] "RemoveContainer" containerID="f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5"
	Jul 31 18:47:46 ha-326651 kubelet[1381]: E0731 18:47:46.314302    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(83869540-accb-4a58-b094-6bdc6b4c1944)\"" pod="kube-system/storage-provisioner" podUID="83869540-accb-4a58-b094-6bdc6b4c1944"
	Jul 31 18:48:01 ha-326651 kubelet[1381]: I0731 18:48:01.314264    1381 scope.go:117] "RemoveContainer" containerID="f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5"
	Jul 31 18:48:02 ha-326651 kubelet[1381]: I0731 18:48:02.026522    1381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-mknlp" podStartSLOduration=582.448537193 podStartE2EDuration="9m45.026486187s" podCreationTimestamp="2024-07-31 18:38:17 +0000 UTC" firstStartedPulling="2024-07-31 18:38:17.621941084 +0000 UTC m=+177.481204405" lastFinishedPulling="2024-07-31 18:38:20.199890073 +0000 UTC m=+180.059153399" observedRunningTime="2024-07-31 18:38:21.104477981 +0000 UTC m=+180.963741322" watchObservedRunningTime="2024-07-31 18:48:02.026486187 +0000 UTC m=+761.885749528"
	Jul 31 18:48:20 ha-326651 kubelet[1381]: E0731 18:48:20.363538    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:48:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:48:24 ha-326651 kubelet[1381]: I0731 18:48:24.315212    1381 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-326651" podUID="55d22288-ccee-4e17-95b6-4a96e86fca09"
	Jul 31 18:48:24 ha-326651 kubelet[1381]: I0731 18:48:24.519209    1381 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-326651"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:49:12.945876  421633 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19356-395032/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326651 -n ha-326651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-326651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 stop -v=7 --alsologtostderr
E0731 18:50:32.741739  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 stop -v=7 --alsologtostderr: exit status 82 (2m0.493418706s)

                                                
                                                
-- stdout --
	* Stopping node "ha-326651-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:49:32.872245  422046 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:49:32.872398  422046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:49:32.872410  422046 out.go:304] Setting ErrFile to fd 2...
	I0731 18:49:32.872416  422046 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:49:32.872577  422046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:49:32.872817  422046 out.go:298] Setting JSON to false
	I0731 18:49:32.872902  422046 mustload.go:65] Loading cluster: ha-326651
	I0731 18:49:32.873258  422046 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:49:32.873340  422046 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:49:32.873515  422046 mustload.go:65] Loading cluster: ha-326651
	I0731 18:49:32.873644  422046 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:49:32.873681  422046 stop.go:39] StopHost: ha-326651-m04
	I0731 18:49:32.874056  422046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:49:32.874117  422046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:49:32.889753  422046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0731 18:49:32.890270  422046 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:49:32.890845  422046 main.go:141] libmachine: Using API Version  1
	I0731 18:49:32.890878  422046 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:49:32.891236  422046 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:49:32.893735  422046 out.go:177] * Stopping node "ha-326651-m04"  ...
	I0731 18:49:32.895150  422046 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 18:49:32.895186  422046 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:49:32.895482  422046 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 18:49:32.895523  422046 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:49:32.898677  422046 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:49:32.899244  422046 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:48:59 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:49:32.899280  422046 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:49:32.899455  422046 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:49:32.899640  422046 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:49:32.899802  422046 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:49:32.899970  422046 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	I0731 18:49:32.988766  422046 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 18:49:33.044102  422046 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 18:49:33.102208  422046 main.go:141] libmachine: Stopping "ha-326651-m04"...
	I0731 18:49:33.102250  422046 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:49:33.103801  422046 main.go:141] libmachine: (ha-326651-m04) Calling .Stop
	I0731 18:49:33.107298  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 0/120
	I0731 18:49:34.108992  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 1/120
	I0731 18:49:35.110882  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 2/120
	I0731 18:49:36.112283  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 3/120
	I0731 18:49:37.113796  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 4/120
	I0731 18:49:38.115559  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 5/120
	I0731 18:49:39.116998  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 6/120
	I0731 18:49:40.118926  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 7/120
	I0731 18:49:41.120342  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 8/120
	I0731 18:49:42.121783  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 9/120
	I0731 18:49:43.123186  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 10/120
	I0731 18:49:44.124991  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 11/120
	I0731 18:49:45.126490  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 12/120
	I0731 18:49:46.127945  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 13/120
	I0731 18:49:47.129728  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 14/120
	I0731 18:49:48.131828  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 15/120
	I0731 18:49:49.133845  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 16/120
	I0731 18:49:50.135149  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 17/120
	I0731 18:49:51.136565  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 18/120
	I0731 18:49:52.137763  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 19/120
	I0731 18:49:53.140073  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 20/120
	I0731 18:49:54.141873  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 21/120
	I0731 18:49:55.143533  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 22/120
	I0731 18:49:56.145045  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 23/120
	I0731 18:49:57.147102  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 24/120
	I0731 18:49:58.149173  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 25/120
	I0731 18:49:59.151273  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 26/120
	I0731 18:50:00.152789  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 27/120
	I0731 18:50:01.154970  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 28/120
	I0731 18:50:02.156442  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 29/120
	I0731 18:50:03.158902  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 30/120
	I0731 18:50:04.160545  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 31/120
	I0731 18:50:05.162280  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 32/120
	I0731 18:50:06.163950  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 33/120
	I0731 18:50:07.165672  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 34/120
	I0731 18:50:08.167458  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 35/120
	I0731 18:50:09.169317  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 36/120
	I0731 18:50:10.171500  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 37/120
	I0731 18:50:11.173139  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 38/120
	I0731 18:50:12.174719  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 39/120
	I0731 18:50:13.176438  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 40/120
	I0731 18:50:14.177814  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 41/120
	I0731 18:50:15.179330  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 42/120
	I0731 18:50:16.181031  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 43/120
	I0731 18:50:17.182445  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 44/120
	I0731 18:50:18.184561  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 45/120
	I0731 18:50:19.187070  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 46/120
	I0731 18:50:20.188671  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 47/120
	I0731 18:50:21.191072  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 48/120
	I0731 18:50:22.192548  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 49/120
	I0731 18:50:23.194696  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 50/120
	I0731 18:50:24.195948  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 51/120
	I0731 18:50:25.197346  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 52/120
	I0731 18:50:26.199003  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 53/120
	I0731 18:50:27.201374  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 54/120
	I0731 18:50:28.203425  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 55/120
	I0731 18:50:29.205483  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 56/120
	I0731 18:50:30.206902  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 57/120
	I0731 18:50:31.208419  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 58/120
	I0731 18:50:32.209919  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 59/120
	I0731 18:50:33.212222  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 60/120
	I0731 18:50:34.213680  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 61/120
	I0731 18:50:35.215180  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 62/120
	I0731 18:50:36.217040  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 63/120
	I0731 18:50:37.218219  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 64/120
	I0731 18:50:38.219694  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 65/120
	I0731 18:50:39.221024  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 66/120
	I0731 18:50:40.222995  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 67/120
	I0731 18:50:41.224357  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 68/120
	I0731 18:50:42.225952  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 69/120
	I0731 18:50:43.228230  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 70/120
	I0731 18:50:44.229588  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 71/120
	I0731 18:50:45.231053  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 72/120
	I0731 18:50:46.232371  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 73/120
	I0731 18:50:47.233832  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 74/120
	I0731 18:50:48.235659  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 75/120
	I0731 18:50:49.238013  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 76/120
	I0731 18:50:50.239677  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 77/120
	I0731 18:50:51.240862  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 78/120
	I0731 18:50:52.242279  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 79/120
	I0731 18:50:53.244666  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 80/120
	I0731 18:50:54.246967  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 81/120
	I0731 18:50:55.248232  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 82/120
	I0731 18:50:56.250097  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 83/120
	I0731 18:50:57.251481  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 84/120
	I0731 18:50:58.253309  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 85/120
	I0731 18:50:59.255025  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 86/120
	I0731 18:51:00.256476  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 87/120
	I0731 18:51:01.257919  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 88/120
	I0731 18:51:02.259430  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 89/120
	I0731 18:51:03.261835  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 90/120
	I0731 18:51:04.263214  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 91/120
	I0731 18:51:05.264987  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 92/120
	I0731 18:51:06.266941  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 93/120
	I0731 18:51:07.268309  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 94/120
	I0731 18:51:08.270454  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 95/120
	I0731 18:51:09.272765  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 96/120
	I0731 18:51:10.274979  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 97/120
	I0731 18:51:11.276409  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 98/120
	I0731 18:51:12.278049  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 99/120
	I0731 18:51:13.280343  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 100/120
	I0731 18:51:14.281949  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 101/120
	I0731 18:51:15.283415  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 102/120
	I0731 18:51:16.285381  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 103/120
	I0731 18:51:17.287445  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 104/120
	I0731 18:51:18.289616  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 105/120
	I0731 18:51:19.290885  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 106/120
	I0731 18:51:20.292364  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 107/120
	I0731 18:51:21.293771  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 108/120
	I0731 18:51:22.295184  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 109/120
	I0731 18:51:23.296658  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 110/120
	I0731 18:51:24.297993  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 111/120
	I0731 18:51:25.299618  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 112/120
	I0731 18:51:26.301280  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 113/120
	I0731 18:51:27.303511  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 114/120
	I0731 18:51:28.305506  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 115/120
	I0731 18:51:29.307013  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 116/120
	I0731 18:51:30.308463  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 117/120
	I0731 18:51:31.310002  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 118/120
	I0731 18:51:32.312274  422046 main.go:141] libmachine: (ha-326651-m04) Waiting for machine to stop 119/120
	I0731 18:51:33.313206  422046 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 18:51:33.313285  422046 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 18:51:33.315267  422046 out.go:177] 
	W0731 18:51:33.316727  422046 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 18:51:33.316749  422046 out.go:239] * 
	* 
	W0731 18:51:33.319859  422046 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 18:51:33.321605  422046 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-326651 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr: exit status 3 (18.873172937s)

                                                
                                                
-- stdout --
	ha-326651
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-326651-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:51:33.368727  422481 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:51:33.368837  422481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:51:33.368845  422481 out.go:304] Setting ErrFile to fd 2...
	I0731 18:51:33.368849  422481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:51:33.369028  422481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:51:33.369185  422481 out.go:298] Setting JSON to false
	I0731 18:51:33.369208  422481 mustload.go:65] Loading cluster: ha-326651
	I0731 18:51:33.369241  422481 notify.go:220] Checking for updates...
	I0731 18:51:33.369574  422481 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:51:33.369590  422481 status.go:255] checking status of ha-326651 ...
	I0731 18:51:33.370001  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.370058  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.386640  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0731 18:51:33.387310  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.388071  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.388098  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.388506  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.388732  422481 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:51:33.390437  422481 status.go:330] ha-326651 host status = "Running" (err=<nil>)
	I0731 18:51:33.390466  422481 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:51:33.390902  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.390958  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.406098  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I0731 18:51:33.406513  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.406997  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.407033  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.407424  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.407620  422481 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:51:33.410517  422481 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:51:33.410966  422481 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:51:33.410990  422481 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:51:33.411231  422481 host.go:66] Checking if "ha-326651" exists ...
	I0731 18:51:33.411516  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.411554  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.426210  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43281
	I0731 18:51:33.426766  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.427389  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.427415  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.427808  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.428012  422481 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:51:33.428243  422481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:51:33.428276  422481 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:51:33.431214  422481 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:51:33.431642  422481 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:51:33.431665  422481 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:51:33.431850  422481 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:51:33.432046  422481 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:51:33.432328  422481 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:51:33.432547  422481 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:51:33.518457  422481 ssh_runner.go:195] Run: systemctl --version
	I0731 18:51:33.527617  422481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:51:33.548723  422481 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:51:33.548753  422481 api_server.go:166] Checking apiserver status ...
	I0731 18:51:33.548785  422481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:51:33.568449  422481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5045/cgroup
	W0731 18:51:33.579438  422481 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5045/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:51:33.579506  422481 ssh_runner.go:195] Run: ls
	I0731 18:51:33.586031  422481 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:51:33.590367  422481 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:51:33.590389  422481 status.go:422] ha-326651 apiserver status = Running (err=<nil>)
	I0731 18:51:33.590399  422481 status.go:257] ha-326651 status: &{Name:ha-326651 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:51:33.590415  422481 status.go:255] checking status of ha-326651-m02 ...
	I0731 18:51:33.590733  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.590778  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.607201  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0731 18:51:33.607650  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.608154  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.608183  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.608556  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.608756  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetState
	I0731 18:51:33.610283  422481 status.go:330] ha-326651-m02 host status = "Running" (err=<nil>)
	I0731 18:51:33.610302  422481 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:51:33.610743  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.610825  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.625547  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I0731 18:51:33.626012  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.626511  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.626531  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.626890  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.627056  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetIP
	I0731 18:51:33.629960  422481 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:51:33.630411  422481 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:46:55 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:51:33.630440  422481 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:51:33.630558  422481 host.go:66] Checking if "ha-326651-m02" exists ...
	I0731 18:51:33.630977  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.631027  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.647146  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0731 18:51:33.647572  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.648033  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.648061  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.648420  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.648608  422481 main.go:141] libmachine: (ha-326651-m02) Calling .DriverName
	I0731 18:51:33.648792  422481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:51:33.648813  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHHostname
	I0731 18:51:33.651665  422481 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:51:33.652063  422481 main.go:141] libmachine: (ha-326651-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:a8:57", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:46:55 +0000 UTC Type:0 Mac:52:54:00:d7:a8:57 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-326651-m02 Clientid:01:52:54:00:d7:a8:57}
	I0731 18:51:33.652084  422481 main.go:141] libmachine: (ha-326651-m02) DBG | domain ha-326651-m02 has defined IP address 192.168.39.202 and MAC address 52:54:00:d7:a8:57 in network mk-ha-326651
	I0731 18:51:33.652242  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHPort
	I0731 18:51:33.652429  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHKeyPath
	I0731 18:51:33.652595  422481 main.go:141] libmachine: (ha-326651-m02) Calling .GetSSHUsername
	I0731 18:51:33.652727  422481 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m02/id_rsa Username:docker}
	I0731 18:51:33.737936  422481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 18:51:33.754981  422481 kubeconfig.go:125] found "ha-326651" server: "https://192.168.39.254:8443"
	I0731 18:51:33.755020  422481 api_server.go:166] Checking apiserver status ...
	I0731 18:51:33.755068  422481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 18:51:33.770131  422481 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup
	W0731 18:51:33.780214  422481 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1373/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 18:51:33.780302  422481 ssh_runner.go:195] Run: ls
	I0731 18:51:33.785308  422481 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 18:51:33.789656  422481 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 18:51:33.789679  422481 status.go:422] ha-326651-m02 apiserver status = Running (err=<nil>)
	I0731 18:51:33.789690  422481 status.go:257] ha-326651-m02 status: &{Name:ha-326651-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 18:51:33.789712  422481 status.go:255] checking status of ha-326651-m04 ...
	I0731 18:51:33.790020  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.790064  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.806049  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0731 18:51:33.806626  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.807141  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.807164  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.807522  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.807718  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetState
	I0731 18:51:33.809526  422481 status.go:330] ha-326651-m04 host status = "Running" (err=<nil>)
	I0731 18:51:33.809544  422481 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:51:33.809829  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.809860  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.825388  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33633
	I0731 18:51:33.825807  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.826337  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.826357  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.826711  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.826956  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetIP
	I0731 18:51:33.829447  422481 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:51:33.829890  422481 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:48:59 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:51:33.829933  422481 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:51:33.830041  422481 host.go:66] Checking if "ha-326651-m04" exists ...
	I0731 18:51:33.830437  422481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:51:33.830480  422481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:51:33.845608  422481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37429
	I0731 18:51:33.846049  422481 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:51:33.846555  422481 main.go:141] libmachine: Using API Version  1
	I0731 18:51:33.846587  422481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:51:33.846952  422481 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:51:33.847198  422481 main.go:141] libmachine: (ha-326651-m04) Calling .DriverName
	I0731 18:51:33.847373  422481 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 18:51:33.847395  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHHostname
	I0731 18:51:33.850376  422481 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:51:33.850801  422481 main.go:141] libmachine: (ha-326651-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ca:72", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:48:59 +0000 UTC Type:0 Mac:52:54:00:dd:ca:72 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-326651-m04 Clientid:01:52:54:00:dd:ca:72}
	I0731 18:51:33.850829  422481 main.go:141] libmachine: (ha-326651-m04) DBG | domain ha-326651-m04 has defined IP address 192.168.39.17 and MAC address 52:54:00:dd:ca:72 in network mk-ha-326651
	I0731 18:51:33.850957  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHPort
	I0731 18:51:33.851154  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHKeyPath
	I0731 18:51:33.851327  422481 main.go:141] libmachine: (ha-326651-m04) Calling .GetSSHUsername
	I0731 18:51:33.851500  422481 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651-m04/id_rsa Username:docker}
	W0731 18:51:52.196585  422481 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.17:22: connect: no route to host
	W0731 18:51:52.196720  422481 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	E0731 18:51:52.196745  422481 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host
	I0731 18:51:52.196757  422481 status.go:257] ha-326651-m04 status: &{Name:ha-326651-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 18:51:52.196786  422481 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.17:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-326651 -n ha-326651
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-326651 logs -n 25: (1.74106397s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m04 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp testdata/cp-test.txt                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651:/home/docker/cp-test_ha-326651-m04_ha-326651.txt                       |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651 sudo cat                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651.txt                                 |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m02:/home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m02 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m03:/home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n                                                                 | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | ha-326651-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-326651 ssh -n ha-326651-m03 sudo cat                                          | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC | 31 Jul 24 18:39 UTC |
	|         | /home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-326651 node stop m02 -v=7                                                     | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:39 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-326651 node start m02 -v=7                                                    | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-326651 -v=7                                                           | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-326651 -v=7                                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:43 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-326651 --wait=true -v=7                                                    | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:45 UTC | 31 Jul 24 18:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-326651                                                                | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:49 UTC |                     |
	| node    | ha-326651 node delete m03 -v=7                                                   | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:49 UTC | 31 Jul 24 18:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-326651 stop -v=7                                                              | ha-326651 | jenkins | v1.33.1 | 31 Jul 24 18:49 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:45:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:45:07.569875  420284 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:45:07.570124  420284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:45:07.570132  420284 out.go:304] Setting ErrFile to fd 2...
	I0731 18:45:07.570136  420284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:45:07.570296  420284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:45:07.570852  420284 out.go:298] Setting JSON to false
	I0731 18:45:07.571852  420284 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8851,"bootTime":1722442657,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:45:07.571920  420284 start.go:139] virtualization: kvm guest
	I0731 18:45:07.574265  420284 out.go:177] * [ha-326651] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:45:07.576024  420284 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:45:07.576045  420284 notify.go:220] Checking for updates...
	I0731 18:45:07.578992  420284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:45:07.580501  420284 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:45:07.582001  420284 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:45:07.583240  420284 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:45:07.584577  420284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:45:07.586220  420284 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:45:07.586323  420284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:45:07.586738  420284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:45:07.586792  420284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:45:07.603523  420284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0731 18:45:07.604065  420284 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:45:07.604756  420284 main.go:141] libmachine: Using API Version  1
	I0731 18:45:07.604781  420284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:45:07.605211  420284 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:45:07.605415  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.641759  420284 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:45:07.643347  420284 start.go:297] selected driver: kvm2
	I0731 18:45:07.643373  420284 start.go:901] validating driver "kvm2" against &{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:45:07.643585  420284 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:45:07.644062  420284 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:45:07.644151  420284 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:45:07.660222  420284 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:45:07.660950  420284 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 18:45:07.660997  420284 cni.go:84] Creating CNI manager for ""
	I0731 18:45:07.661006  420284 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 18:45:07.661083  420284 start.go:340] cluster config:
	{Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:45:07.661253  420284 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:45:07.664071  420284 out.go:177] * Starting "ha-326651" primary control-plane node in "ha-326651" cluster
	I0731 18:45:07.665438  420284 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:45:07.665502  420284 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:45:07.665515  420284 cache.go:56] Caching tarball of preloaded images
	I0731 18:45:07.665615  420284 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 18:45:07.665629  420284 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 18:45:07.665793  420284 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/config.json ...
	I0731 18:45:07.666075  420284 start.go:360] acquireMachinesLock for ha-326651: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 18:45:07.666131  420284 start.go:364] duration metric: took 33.292µs to acquireMachinesLock for "ha-326651"
	I0731 18:45:07.666150  420284 start.go:96] Skipping create...Using existing machine configuration
	I0731 18:45:07.666160  420284 fix.go:54] fixHost starting: 
	I0731 18:45:07.666462  420284 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:45:07.666497  420284 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:45:07.681525  420284 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42741
	I0731 18:45:07.681952  420284 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:45:07.682419  420284 main.go:141] libmachine: Using API Version  1
	I0731 18:45:07.682438  420284 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:45:07.682771  420284 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:45:07.682984  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.683174  420284 main.go:141] libmachine: (ha-326651) Calling .GetState
	I0731 18:45:07.684757  420284 fix.go:112] recreateIfNeeded on ha-326651: state=Running err=<nil>
	W0731 18:45:07.684786  420284 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 18:45:07.687627  420284 out.go:177] * Updating the running kvm2 "ha-326651" VM ...
	I0731 18:45:07.689019  420284 machine.go:94] provisionDockerMachine start ...
	I0731 18:45:07.689046  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:45:07.689245  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.692042  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.692623  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.692659  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.692807  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.693054  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.693217  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.693343  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.693551  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.693758  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.693770  420284 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 18:45:07.806460  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:45:07.806492  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:07.806744  420284 buildroot.go:166] provisioning hostname "ha-326651"
	I0731 18:45:07.806771  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:07.806993  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.810013  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.810406  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.810430  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.810633  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.810819  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.810980  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.811099  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.811266  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.811445  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.811456  420284 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-326651 && echo "ha-326651" | sudo tee /etc/hostname
	I0731 18:45:07.939830  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-326651
	
	I0731 18:45:07.939868  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:07.943013  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.943378  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:07.943402  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:07.943614  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:07.943829  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.944013  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:07.944197  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:07.944427  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:07.944662  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:07.944694  420284 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-326651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-326651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-326651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 18:45:08.057766  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 18:45:08.057798  420284 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 18:45:08.057849  420284 buildroot.go:174] setting up certificates
	I0731 18:45:08.057864  420284 provision.go:84] configureAuth start
	I0731 18:45:08.057881  420284 main.go:141] libmachine: (ha-326651) Calling .GetMachineName
	I0731 18:45:08.058170  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:45:08.061104  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.061504  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.061534  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.061718  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.064180  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.064592  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.064618  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.064764  420284 provision.go:143] copyHostCerts
	I0731 18:45:08.064809  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:45:08.064867  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 18:45:08.064877  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 18:45:08.064962  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 18:45:08.065102  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:45:08.065140  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 18:45:08.065145  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 18:45:08.065185  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 18:45:08.065275  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:45:08.065300  420284 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 18:45:08.065309  420284 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 18:45:08.065346  420284 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 18:45:08.065438  420284 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.ha-326651 san=[127.0.0.1 192.168.39.220 ha-326651 localhost minikube]
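(The SAN list logged above mixes IP addresses and host names. As a standalone illustration, and not minikube's actual provision code, the following Go sketch shows how such a list maps onto the IPAddresses and DNSNames fields of an x509 server-certificate template; key generation and signing are omitted, and the organization string is taken from the log's `org=` field.)

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// sanTemplate splits a SAN list such as
// [127.0.0.1 192.168.39.220 ha-326651 localhost minikube]
// into IP and DNS entries on a server-certificate template.
func sanTemplate(org string, sans []string) *x509.Certificate {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	return tmpl
}

func main() {
	t := sanTemplate("jenkins.ha-326651",
		[]string{"127.0.0.1", "192.168.39.220", "ha-326651", "localhost", "minikube"})
	fmt.Println(t.IPAddresses, t.DNSNames)
}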
	I0731 18:45:08.369389  420284 provision.go:177] copyRemoteCerts
	I0731 18:45:08.369459  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 18:45:08.369486  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.372569  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.372948  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.372984  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.373155  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:08.373479  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.373656  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:08.373815  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:45:08.459621  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 18:45:08.459710  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 18:45:08.489737  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 18:45:08.489806  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0731 18:45:08.516509  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 18:45:08.516599  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 18:45:08.541445  420284 provision.go:87] duration metric: took 483.565088ms to configureAuth
	I0731 18:45:08.541484  420284 buildroot.go:189] setting minikube options for container-runtime
	I0731 18:45:08.541704  420284 config.go:182] Loaded profile config "ha-326651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:45:08.541776  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:45:08.544396  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.544803  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:45:08.544835  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:45:08.544992  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:45:08.545203  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.545342  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:45:08.545514  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:45:08.545756  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:45:08.545992  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:45:08.546017  420284 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 18:46:39.526631  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 18:46:39.526663  420284 machine.go:97] duration metric: took 1m31.837624181s to provisionDockerMachine
	I0731 18:46:39.526678  420284 start.go:293] postStartSetup for "ha-326651" (driver="kvm2")
	I0731 18:46:39.526690  420284 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 18:46:39.526710  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.527088  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 18:46:39.527122  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.530742  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.531182  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.531210  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.531397  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.531608  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.531887  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.532046  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.617742  420284 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 18:46:39.622777  420284 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 18:46:39.622814  420284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 18:46:39.622903  420284 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 18:46:39.623058  420284 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 18:46:39.623079  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 18:46:39.623225  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 18:46:39.633805  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:46:39.660049  420284 start.go:296] duration metric: took 133.352181ms for postStartSetup
	I0731 18:46:39.660123  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.660484  420284 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 18:46:39.660516  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.663590  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.664174  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.664204  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.664369  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.664598  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.664789  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.664958  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	W0731 18:46:39.748458  420284 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 18:46:39.748488  420284 fix.go:56] duration metric: took 1m32.082329042s for fixHost
	I0731 18:46:39.748517  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.751306  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.751738  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.751768  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.751923  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.752202  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.752413  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.752550  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.752728  420284 main.go:141] libmachine: Using SSH client type: native
	I0731 18:46:39.752933  420284 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0731 18:46:39.752946  420284 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 18:46:39.862986  420284 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722451599.826671641
	
	I0731 18:46:39.863011  420284 fix.go:216] guest clock: 1722451599.826671641
	I0731 18:46:39.863020  420284 fix.go:229] Guest: 2024-07-31 18:46:39.826671641 +0000 UTC Remote: 2024-07-31 18:46:39.74849775 +0000 UTC m=+92.215740015 (delta=78.173891ms)
	I0731 18:46:39.863046  420284 fix.go:200] guest clock delta is within tolerance: 78.173891ms
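(The three fix.go lines above compute the guest-versus-host clock delta from `date +%s.%N` and accept it because it is small. A minimal sketch of that tolerance check follows, using the timestamps from the log; the 2s threshold is an assumption for illustration, since the log only shows that a ~78ms delta passed.)

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance reports whether the guest and host clocks differ by
// no more than the given tolerance.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1722451599, 826671641) // parsed from `date +%s.%N` on the VM
	host := time.Date(2024, 7, 31, 18, 46, 39, 748497750, time.UTC)
	fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true, delta is about 78ms
}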
	I0731 18:46:39.863054  420284 start.go:83] releasing machines lock for "ha-326651", held for 1m32.19691177s
	I0731 18:46:39.863081  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.863372  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:46:39.865850  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.866243  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.866274  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.866392  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.866899  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.867075  420284 main.go:141] libmachine: (ha-326651) Calling .DriverName
	I0731 18:46:39.867161  420284 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 18:46:39.867202  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.867303  420284 ssh_runner.go:195] Run: cat /version.json
	I0731 18:46:39.867320  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHHostname
	I0731 18:46:39.870223  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870360  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870619  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.870647  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.870834  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.870842  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:39.870866  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:39.871042  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.871068  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHPort
	I0731 18:46:39.871227  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.871246  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHKeyPath
	I0731 18:46:39.871459  420284 main.go:141] libmachine: (ha-326651) Calling .GetSSHUsername
	I0731 18:46:39.871450  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.871595  420284 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/ha-326651/id_rsa Username:docker}
	I0731 18:46:39.950297  420284 ssh_runner.go:195] Run: systemctl --version
	I0731 18:46:39.974393  420284 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 18:46:40.140781  420284 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 18:46:40.147030  420284 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 18:46:40.147153  420284 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 18:46:40.158842  420284 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 18:46:40.158869  420284 start.go:495] detecting cgroup driver to use...
	I0731 18:46:40.158936  420284 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 18:46:40.181327  420284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 18:46:40.198423  420284 docker.go:217] disabling cri-docker service (if available) ...
	I0731 18:46:40.198506  420284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 18:46:40.214757  420284 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 18:46:40.231415  420284 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 18:46:40.417287  420284 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 18:46:40.580540  420284 docker.go:233] disabling docker service ...
	I0731 18:46:40.580623  420284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 18:46:40.597414  420284 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 18:46:40.611142  420284 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 18:46:40.760442  420284 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 18:46:40.910810  420284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 18:46:40.924560  420284 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 18:46:40.944429  420284 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 18:46:40.944506  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.955406  420284 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 18:46:40.955489  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.965730  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.976562  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:40.986790  420284 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 18:46:40.997311  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.007580  420284 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.019068  420284 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 18:46:41.029644  420284 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 18:46:41.039109  420284 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 18:46:41.048174  420284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:46:41.200673  420284 ssh_runner.go:195] Run: sudo systemctl restart crio
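(The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed over SSH: it sets the pause image and cgroup manager, then restarts CRI-O. A minimal local Go sketch of the same key-rewrite idea follows; the path and helper name are illustrative only, the real code shells out to sed on the VM exactly as logged.)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioKey rewrites a `key = value` line in a CRI-O drop-in config,
// mirroring the sed invocations in the log above.
func setCrioKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Hypothetical local copy; on the VM this is /etc/crio/crio.conf.d/02-crio.conf.
	_ = setCrioKey("02-crio.conf", "pause_image", "registry.k8s.io/pause:3.9")
	_ = setCrioKey("02-crio.conf", "cgroup_manager", "cgroupfs")
}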
	I0731 18:46:41.493434  420284 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 18:46:41.493544  420284 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 18:46:41.498615  420284 start.go:563] Will wait 60s for crictl version
	I0731 18:46:41.498681  420284 ssh_runner.go:195] Run: which crictl
	I0731 18:46:41.502858  420284 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 18:46:41.545456  420284 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 18:46:41.545554  420284 ssh_runner.go:195] Run: crio --version
	I0731 18:46:41.576407  420284 ssh_runner.go:195] Run: crio --version
	I0731 18:46:41.606904  420284 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 18:46:41.608103  420284 main.go:141] libmachine: (ha-326651) Calling .GetIP
	I0731 18:46:41.610560  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:41.610919  420284 main.go:141] libmachine: (ha-326651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:eb:7a:d3", ip: ""} in network mk-ha-326651: {Iface:virbr1 ExpiryTime:2024-07-31 19:34:55 +0000 UTC Type:0 Mac:52:54:00:eb:7a:d3 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-326651 Clientid:01:52:54:00:eb:7a:d3}
	I0731 18:46:41.610941  420284 main.go:141] libmachine: (ha-326651) DBG | domain ha-326651 has defined IP address 192.168.39.220 and MAC address 52:54:00:eb:7a:d3 in network mk-ha-326651
	I0731 18:46:41.611115  420284 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 18:46:41.616019  420284 kubeadm.go:883] updating cluster {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 18:46:41.616195  420284 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:46:41.616243  420284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:46:41.659725  420284 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:46:41.659752  420284 crio.go:433] Images already preloaded, skipping extraction
	I0731 18:46:41.659806  420284 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 18:46:41.693988  420284 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 18:46:41.694020  420284 cache_images.go:84] Images are preloaded, skipping loading
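(The "all images are preloaded" decision above is driven by `sudo crictl images --output json`. The sketch below shows one way to read that output in Go; the struct mirrors only the fields the sketch needs and is an assumption about crictl's JSON shape, not a full schema.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages captures just the repo tags from `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func listImageTags() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range parsed.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := listImageTags()
	fmt.Println(tags, err)
}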
	I0731 18:46:41.694034  420284 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.30.3 crio true true} ...
	I0731 18:46:41.694186  420284 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-326651 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 18:46:41.694270  420284 ssh_runner.go:195] Run: crio config
	I0731 18:46:41.742178  420284 cni.go:84] Creating CNI manager for ""
	I0731 18:46:41.742204  420284 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 18:46:41.742217  420284 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 18:46:41.742253  420284 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-326651 NodeName:ha-326651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 18:46:41.742410  420284 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-326651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 18:46:41.742434  420284 kube-vip.go:115] generating kube-vip config ...
	I0731 18:46:41.742478  420284 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 18:46:41.754646  420284 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 18:46:41.754780  420284 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
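(The kube-vip manifest above is rendered by minikube from a template, as the kube-vip.go:115/137 lines indicate. A cut-down sketch of that idea follows, filling in only the two values that vary for this cluster, the VIP 192.168.39.254 and the eth0 interface; the template text is illustrative, not minikube's actual template.)

package main

import (
	"os"
	"text/template"
)

// vipTmpl is a fragment of a kube-vip manifest with the per-cluster values templated.
const vipTmpl = `    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	_ = t.Execute(os.Stdout, struct {
		Interface string
		VIP       string
	}{Interface: "eth0", VIP: "192.168.39.254"})
}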
	I0731 18:46:41.754863  420284 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 18:46:41.764820  420284 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 18:46:41.764931  420284 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 18:46:41.774884  420284 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 18:46:41.792614  420284 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 18:46:41.809676  420284 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 18:46:41.826698  420284 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 18:46:41.843934  420284 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 18:46:41.848768  420284 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 18:46:41.998053  420284 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 18:46:42.013081  420284 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651 for IP: 192.168.39.220
	I0731 18:46:42.013111  420284 certs.go:194] generating shared ca certs ...
	I0731 18:46:42.013133  420284 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.013313  420284 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 18:46:42.013354  420284 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 18:46:42.013373  420284 certs.go:256] generating profile certs ...
	I0731 18:46:42.013459  420284 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/client.key
	I0731 18:46:42.013489  420284 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea
	I0731 18:46:42.013504  420284 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.202 192.168.39.50 192.168.39.254]
	I0731 18:46:42.278192  420284 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea ...
	I0731 18:46:42.278230  420284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea: {Name:mk8a03ec5a011b43a140ef68f41313daf1725207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.278408  420284 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea ...
	I0731 18:46:42.278420  420284 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea: {Name:mke80c81b9fe18fc276220acecd36fb9cd9a551d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:46:42.278501  420284 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt.8d8ecfea -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt
	I0731 18:46:42.278673  420284 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key.8d8ecfea -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key
	I0731 18:46:42.278931  420284 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key
	I0731 18:46:42.278973  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 18:46:42.279011  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 18:46:42.279022  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 18:46:42.279033  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 18:46:42.279042  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 18:46:42.279067  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 18:46:42.279088  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 18:46:42.279101  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 18:46:42.279178  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 18:46:42.279216  420284 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 18:46:42.279226  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 18:46:42.279251  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 18:46:42.279557  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 18:46:42.279653  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 18:46:42.279721  420284 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 18:46:42.279759  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.279784  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.279798  420284 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.281193  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 18:46:42.307184  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 18:46:42.331020  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 18:46:42.355958  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 18:46:42.380013  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 18:46:42.403692  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 18:46:42.427924  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 18:46:42.452888  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/ha-326651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 18:46:42.477510  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 18:46:42.500911  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 18:46:42.524138  420284 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 18:46:42.548231  420284 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 18:46:42.564934  420284 ssh_runner.go:195] Run: openssl version
	I0731 18:46:42.571140  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 18:46:42.582942  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.588005  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.588064  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 18:46:42.593880  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 18:46:42.603818  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 18:46:42.614851  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.619304  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.619360  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 18:46:42.624978  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 18:46:42.634789  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 18:46:42.646401  420284 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.651377  420284 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.651429  420284 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 18:46:42.657107  420284 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
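For context on the certificate-install sequence above: each CA bundle is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed with "openssl x509 -hash -noout -in <pem>", and /etc/ssl/certs/<hash>.0 is then symlinked back to the certificate so the system trust store picks it up. The following is a minimal Go sketch of that hash-and-link step, shelling out to openssl exactly as the log does; the helper name and error handling are illustrative only and are not minikube's actual certs.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert assumes certPath already sits under /usr/share/ca-certificates
// and creates the /etc/ssl/certs/<hash>.0 symlink, mirroring the
// "openssl x509 -hash" + "ln -fs" pair seen in the log above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Replace any stale link, as "ln -fs" does.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}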
	I0731 18:46:42.701068  420284 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 18:46:42.757161  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 18:46:42.804167  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 18:46:42.820725  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 18:46:42.842033  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 18:46:42.893202  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 18:46:43.019732  420284 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
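The "openssl x509 -noout -in <crt> -checkend 86400" runs above succeed only if each control-plane certificate will still be valid 86400 seconds (24 hours) from now. A hedged Go equivalent using crypto/x509 is sketched below; the function name is illustrative and only the cert paths and the 24-hour window are taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// d from now -- the same question "-checkend 86400" answers above.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}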
	I0731 18:46:43.129593  420284 kubeadm.go:392] StartCluster: {Name:ha-326651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-326651 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.17 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:46:43.129762  420284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 18:46:43.129824  420284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 18:46:43.326870  420284 cri.go:89] found id: "dda547983c3f5ddc08ff732c5926c9248e3084f3d726fca7f7ab4dc9501a9607"
	I0731 18:46:43.326902  420284 cri.go:89] found id: "d9d475fbcc274fb6c41cdc47837ca1e6a4aa1bcbd7ff3232a10ce55abffa5abb"
	I0731 18:46:43.326908  420284 cri.go:89] found id: "6ca2ae238d32b3d2db46923dad6103c3f22adb55d82c29497b299bc93d5d44ea"
	I0731 18:46:43.326912  420284 cri.go:89] found id: "68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33"
	I0731 18:46:43.326917  420284 cri.go:89] found id: "36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7"
	I0731 18:46:43.326921  420284 cri.go:89] found id: "81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821"
	I0731 18:46:43.326926  420284 cri.go:89] found id: "5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd"
	I0731 18:46:43.326930  420284 cri.go:89] found id: "c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd"
	I0731 18:46:43.326936  420284 cri.go:89] found id: "44a042c1af7361bc36e5d685998f2f5ec134304e667df5eb99db17109c62c40c"
	I0731 18:46:43.326947  420284 cri.go:89] found id: "bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a"
	I0731 18:46:43.326951  420284 cri.go:89] found id: ""
	I0731 18:46:43.327014  420284 ssh_runner.go:195] Run: sudo runc list -f json
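The container IDs listed above come from the crictl invocation logged a few lines earlier: all states, quiet output, filtered by the kube-system namespace label. A small Go sketch of that lookup is below; the wrapper function is illustrative only, the crictl flags are quoted verbatim from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl command seen in the log and
// returns the bare container IDs, one per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}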
	
	
	==> CRI-O <==
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.868850409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451912868819433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7a9aacf-441e-4436-8a20-3a2e2abbaf09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.869634260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abe0db95-fb5b-490f-883f-2a982195dc6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.869714572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abe0db95-fb5b-490f-883f-2a982195dc6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.870379270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abe0db95-fb5b-490f-883f-2a982195dc6a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.919895829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccae0b93-9c18-4dc2-b467-c11d6f0e194e name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.919993907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccae0b93-9c18-4dc2-b467-c11d6f0e194e name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.921760063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94578efc-ffd9-415a-951c-d23dfa031d7a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.922456593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451912922422018,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94578efc-ffd9-415a-951c-d23dfa031d7a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.923671999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c132eb5b-ffe5-4ec4-8fcb-095fbde70588 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.923817134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c132eb5b-ffe5-4ec4-8fcb-095fbde70588 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.924476118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c132eb5b-ffe5-4ec4-8fcb-095fbde70588 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.971551943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db3eb7f5-7198-47a2-9e0a-a739f833c3ae name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.971627396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db3eb7f5-7198-47a2-9e0a-a739f833c3ae name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.972904216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3db41a91-9cf0-456e-8721-2a532c8a6d1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.973434836Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451912973406923,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3db41a91-9cf0-456e-8721-2a532c8a6d1d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.974236558Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ece674a9-807c-4d6c-a344-b53643a2f309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.974315186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ece674a9-807c-4d6c-a344-b53643a2f309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:52 ha-326651 crio[3766]: time="2024-07-31 18:51:52.974752554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ece674a9-807c-4d6c-a344-b53643a2f309 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.018356081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4557b4e-fb80-4caf-9fca-7822d156d06a name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.018451532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4557b4e-fb80-4caf-9fca-7822d156d06a name=/runtime.v1.RuntimeService/Version
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.020221906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69b53d6e-3e2f-49ab-bcf2-d2223bb94868 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.020686788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722451913020661958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69b53d6e-3e2f-49ab-bcf2-d2223bb94868 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.021253848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e35be996-e351-460a-83c3-ec6bb9b9db37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.021317297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e35be996-e351-460a-83c3-ec6bb9b9db37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 18:51:53 ha-326651 crio[3766]: time="2024-07-31 18:51:53.021762644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f1fbb83e91bf2ceda40cc14c1cddc73395b7049dcf9b30c8c7dd3f0b63206d8f,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722451681336397063,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722451645333809792,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722451644330060179,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5,PodSandboxId:40023062c6e42b9cde79474333e17d9fc952aa2f821c2c22e92ea539ff193a97,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722451637323072104,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83869540-accb-4a58-b094-6bdc6b4c1944,},Annotations:map[string]string{io.kubernetes.container.hash: eb25889a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23352feabf7986642ce0af9eed7d26d7903061024ddb4a187eb57e7d92d8344,PodSandboxId:874e1611bd497d7c6173ca77ebaa006b8b690d799db254f8431f45cc083fc5ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722451636596811498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annotations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7f9d0f3ec264cfe80e30bab1312a1b14be3440554b22d84b8228338d19ba81c,PodSandboxId:98c73fb71d53ae977a209a6fc86ab334dfa6805fae26a1eebbf991e93e7fcb5c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722451617293787169,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a28903eb64f52c3a48175d5e08d493d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5,PodSandboxId:e083fb2256d90c5721ce447ed5c5622459c421799811cebfd426f6e981a6197e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603753889134,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60,PodSandboxId:2307a0c144e9eddea98cd7752d8ea2c48ad95b5c60d05c78aa2dcab5db9ae930,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722451603576905002,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85,PodSandboxId:5218a1916fc1837d4b8c9344a7360e5cf1ea97957a3bb0fb7b7b504b80d82af9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722451603368681368,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1ed66a3c897be3afa8a8d6353391d1,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io
.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d,PodSandboxId:414d0bfd7c538eac9957693983bceac26e46b3548e870c74a3cb071aeea3d9e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722451603448021201,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d,PodSandboxId:e9ad7728363b0eb4dd8e4c66d3e5c4ed7bd665f1c3e1d5968ad1b9048aabbdf0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722451603430034518,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kubernetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b,PodSandboxId:e86963e2fc3dc99d97859001ca9c25283ddf024c24feb17abe5f9ccb24ac30ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722451603222459197,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a,PodSandboxId:0c86ce87b523b6fc27281c7a4019a52245144b204c66cfc26134795f43e7e904,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722451603292082892,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1314e538bc1ac5bc
50f9e801bfd0998,},Annotations:map[string]string{io.kubernetes.container.hash: 4f20c4bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094,PodSandboxId:b32cd5b71cb09162f783668f627f22537dbcd5d8aa3f9f6cddde4563c1df5de9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722451603218101185,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f413f75c914155b9a1475a6f1c2ac170d33e25b2884fe9bacee46ee9ddd05bf2,PodSandboxId:25be6f24676d4af5b1a42f2cd973275b64919ea86a4a91b3c525f4aa3e4117ee,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722451100226064648,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mknlp,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15a3f7d9-8405-4304-87da-8962e2d81f4e,},Annot
ations:map[string]string{io.kubernetes.container.hash: 37ff82ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7,PodSandboxId:8a4d6fb11ec09e71adf5fceca1221710adec7c9adf2a39cf9f6ae571a7a399c1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950607883636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p2tfn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 587a07ed-e2cf-40d1-8bc7-3800836f036e,},Annotations:map[string]string{io.kube
rnetes.container.hash: f9189930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33,PodSandboxId:d651e4190c72a4f3c5cf47fc8d4121ce49424f9a0bcbd3d36fe5443562ada1f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722450950609077674,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hsr7k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2e5422b4-4ebd-43f5-a062-d3be49c5be0a,},Annotations:map[string]string{io.kubernetes.container.hash: 451da36e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821,PodSandboxId:8783b79032fded3c3363398c839701a57efd1efa2040d1411034fed13184b838,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722450938615060095,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-n7q8p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70ddf674-b678-4b7b-bae7-fd62e1c87bb5,},Annotations:map[string]string{io.kubernetes.container.hash: 9285fcd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd,PodSandboxId:4ed8613feb5ec793011f74ac8eca0842cf4f100eb91b45b783f59b59024f2e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722450934538757500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hg6sj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 40cf0ce9-4b32-45fb-adef-577d742e433a,},Annotations:map[string]string{io.kubernetes.container.hash: c9a2d3b7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd,PodSandboxId:4bc17ce1c9d2feea25fd548144d91e888402f2801e5c660a59ddf9e2320d921f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722450914121861073,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe2900576d948009cd1a7f6741c24d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a,PodSandboxId:1e765f5d9b3b0015eb7f4a3f12c8d1264a66b5393a3d223b3c827ba2e6abfd33,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722450914001692935,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-326651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c72faff8a00fedc572d381491b77ea1,},Annotations:map[string]string{io.kubernetes.container.hash: a98be635,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e35be996-e351-460a-83c3-ec6bb9b9db37 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f1fbb83e91bf2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   40023062c6e42       storage-provisioner
	02116477a1866       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   5218a1916fc18       kube-controller-manager-ha-326651
	74114b61ea048       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   0c86ce87b523b       kube-apiserver-ha-326651
	f2cbf6604849f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   40023062c6e42       storage-provisioner
	e23352feabf79       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   874e1611bd497       busybox-fc5497c4f-mknlp
	d7f9d0f3ec264       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   98c73fb71d53a       kube-vip-ha-326651
	d9d3afedcdd25       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e083fb2256d90       coredns-7db6d8ff4d-hsr7k
	35b19bba2ba5e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   2307a0c144e9e       kindnet-n7q8p
	68867a78c6b36       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   414d0bfd7c538       kube-proxy-hg6sj
	d626a6c1307f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e9ad7728363b0       coredns-7db6d8ff4d-p2tfn
	c3968b33a3882       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   5218a1916fc18       kube-controller-manager-ha-326651
	64795d240e81b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   0c86ce87b523b       kube-apiserver-ha-326651
	909924541d869       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   e86963e2fc3dc       etcd-ha-326651
	b98405b29355d       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   b32cd5b71cb09       kube-scheduler-ha-326651
	f413f75c91415       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   25be6f24676d4       busybox-fc5497c4f-mknlp
	68c50c65ea238       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   d651e4190c72a       coredns-7db6d8ff4d-hsr7k
	36f0c9b04bb2b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   8a4d6fb11ec09       coredns-7db6d8ff4d-p2tfn
	81362a0e08184       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   8783b79032fde       kindnet-n7q8p
	5abc9372bd5fd       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   4ed8613feb5ec       kube-proxy-hg6sj
	c40e9679adc35       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   4bc17ce1c9d2f       kube-scheduler-ha-326651
	bd3d8dbedb96a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   1e765f5d9b3b0       etcd-ha-326651
	
	
	==> coredns [36f0c9b04bb2bb59bef130f0c379630287c7d65cb9e73fd3f02d197723f8eac7] <==
	[INFO] 10.244.0.4:43466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105451s
	[INFO] 10.244.0.4:43878 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152423s
	[INFO] 10.244.0.4:49227 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079008s
	[INFO] 10.244.0.4:47339 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000074836s
	[INFO] 10.244.0.4:60002 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056953s
	[INFO] 10.244.1.2:60772 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013788s
	[INFO] 10.244.1.2:34997 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091978s
	[INFO] 10.244.2.2:48501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137292s
	[INFO] 10.244.2.2:41701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113322s
	[INFO] 10.244.2.2:46841 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000192541s
	[INFO] 10.244.2.2:37979 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066316s
	[INFO] 10.244.0.4:41261 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093714s
	[INFO] 10.244.0.4:56128 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073138s
	[INFO] 10.244.1.2:60703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131127s
	[INFO] 10.244.1.2:47436 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000239598s
	[INFO] 10.244.1.2:57459 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000181068s
	[INFO] 10.244.2.2:56898 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174969s
	[INFO] 10.244.2.2:33868 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108451s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1968&timeout=9m34s&timeoutSeconds=574&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [68c50c65ea238ff40bfbe96cc270a488ea0fd1f9142a4c52453bc647888f0e33] <==
	[INFO] 10.244.2.2:52172 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000280659s
	[INFO] 10.244.2.2:43370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001363635s
	[INFO] 10.244.2.2:52527 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117452s
	[INFO] 10.244.2.2:48596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117278s
	[INFO] 10.244.0.4:55816 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001992063s
	[INFO] 10.244.0.4:33045 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001291238s
	[INFO] 10.244.0.4:37880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000043099s
	[INFO] 10.244.1.2:40143 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128845s
	[INFO] 10.244.1.2:48970 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000131569s
	[INFO] 10.244.0.4:57102 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000075402s
	[INFO] 10.244.0.4:54508 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004372s
	[INFO] 10.244.1.2:37053 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000194922s
	[INFO] 10.244.2.2:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129881s
	[INFO] 10.244.2.2:48437 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148815s
	[INFO] 10.244.0.4:50060 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000094079s
	[INFO] 10.244.0.4:42736 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105289s
	[INFO] 10.244.0.4:43280 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000052254s
	[INFO] 10.244.0.4:47658 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074002s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1968&timeout=9m45s&timeoutSeconds=585&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d626a6c1307f87e4104333615436ecbae66969962830fbdd65e96530d25fd33d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1442526784]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:52.056) (total time: 10001ms):
	Trace[1442526784]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:47:02.058)
	Trace[1442526784]: [10.001645878s] [10.001645878s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1971090037]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:54.856) (total time: 13784ms):
	Trace[1971090037]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer 13784ms (18:47:08.640)
	Trace[1971090037]: [13.784174422s] [13.784174422s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52146->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d9d3afedcdd25bf57616d1b8fb23352894009a3dd393f97a9e877a1979b3f7e5] <==
	[INFO] plugin/kubernetes: Trace[207031835]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:58.051) (total time: 10589ms):
	Trace[207031835]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47364->10.96.0.1:443: read: connection reset by peer 10589ms (18:47:08.641)
	Trace[207031835]: [10.589431725s] [10.589431725s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47364->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[925302837]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 18:46:55.225) (total time: 13415ms):
	Trace[925302837]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer 13415ms (18:47:08.641)
	Trace[925302837]: [13.415917102s] [13.415917102s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46052->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-326651
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T18_35_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:51:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:47:30 +0000   Wed, 31 Jul 2024 18:35:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-326651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 419482855e6c4b5d814fd4a3e9e4847f
	  System UUID:                41948285-5e6c-4b5d-814f-d4a3e9e4847f
	  Boot ID:                    87f7122f-f0c1-4fc2-964d-0fcb352e2937
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mknlp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-hsr7k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-p2tfn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-326651                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-n7q8p                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-326651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-326651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-hg6sj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-326651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-326651                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m28s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-326651 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-326651 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-326651 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-326651 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Warning  ContainerGCFailed        5m33s (x2 over 6m33s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-326651 event: Registered Node ha-326651 in Controller
	
	
	Name:               ha-326651-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_36_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:36:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:51:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 18:48:10 +0000   Wed, 31 Jul 2024 18:47:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-326651-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2e6699cde3924aaf94b25ab366c2acb8
	  System UUID:                2e6699cd-e392-4aaf-94b2-5ab366c2acb8
	  Boot ID:                    f51ac4b7-b2a2-46f0-bb97-f1e2b5e5d270
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cs6t8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-326651-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-7l9l7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-326651-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-326651-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-stqb2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-326651-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-326651-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-326651-m02 status is now: NodeNotReady
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m44s (x8 over 4m44s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s (x8 over 4m44s)  kubelet          Node ha-326651-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s (x7 over 4m44s)  kubelet          Node ha-326651-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-326651-m02 event: Registered Node ha-326651-m02 in Controller
	
	
	Name:               ha-326651-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-326651-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=ha-326651
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T18_38_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 18:38:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-326651-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 18:49:25 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:50:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:50:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:50:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 18:49:05 +0000   Wed, 31 Jul 2024 18:50:07 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-326651-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbaa436975294cf08fb310ae9ef7d64d
	  System UUID:                cbaa4369-7529-4cf0-8fb3-10ae9ef7d64d
	  Boot ID:                    0727a61c-d910-40e1-b47d-ff631c7b025c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tlkkk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-nmwh7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-2nq9j           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-326651-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-326651-m04 event: Registered Node ha-326651-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-326651-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-326651-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-326651-m04 has been rebooted, boot id: 0727a61c-d910-40e1-b47d-ff631c7b025c
	  Normal   NodeReady                2m48s                  kubelet          Node ha-326651-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m31s)   node-controller  Node ha-326651-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 18:35] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.063136] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063799] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.163467] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.151948] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.299453] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.312604] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.062376] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.195979] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +1.049374] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.105366] systemd-fstab-generator[1374]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.338531] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.117589] kauditd_printk_skb: 34 callbacks suppressed
	[Jul31 18:36] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 18:46] systemd-fstab-generator[3685]: Ignoring "noauto" option for root device
	[  +0.180536] systemd-fstab-generator[3697]: Ignoring "noauto" option for root device
	[  +0.181034] systemd-fstab-generator[3711]: Ignoring "noauto" option for root device
	[  +0.150445] systemd-fstab-generator[3723]: Ignoring "noauto" option for root device
	[  +0.281151] systemd-fstab-generator[3751]: Ignoring "noauto" option for root device
	[  +0.802477] systemd-fstab-generator[3852]: Ignoring "noauto" option for root device
	[ +13.298730] kauditd_printk_skb: 217 callbacks suppressed
	[Jul31 18:47] kauditd_printk_skb: 1 callbacks suppressed
	[ +19.161472] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [909924541d8690922260b5929a70f2f6c9a8d703f91fc93c456707d01e1f810b] <==
	{"level":"info","ts":"2024-07-31T18:48:23.857283Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:23.857887Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:48:24.490243Z","caller":"traceutil/trace.go:171","msg":"trace[495671616] linearizableReadLoop","detail":"{readStateIndex:2809; appliedIndex:2809; }","duration":"140.808708ms","start":"2024-07-31T18:48:24.349393Z","end":"2024-07-31T18:48:24.490202Z","steps":["trace[495671616] 'read index received'  (duration: 140.735902ms)","trace[495671616] 'applied index is now lower than readState.Index'  (duration: 70.983µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T18:48:24.490445Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.996454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T18:48:24.49058Z","caller":"traceutil/trace.go:171","msg":"trace[1656594907] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2400; }","duration":"141.248448ms","start":"2024-07-31T18:48:24.349314Z","end":"2024-07-31T18:48:24.490562Z","steps":["trace[1656594907] 'agreement among raft nodes before linearized reading'  (duration: 141.016508ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:48:24.490873Z","caller":"traceutil/trace.go:171","msg":"trace[1306060640] transaction","detail":"{read_only:false; response_revision:2401; number_of_response:1; }","duration":"167.55476ms","start":"2024-07-31T18:48:24.323303Z","end":"2024-07-31T18:48:24.490858Z","steps":["trace[1306060640] 'process raft request'  (duration: 167.373182ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:48:32.152573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.68221ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-326651-m03\" ","response":"range_response_count:1 size:6884"}
	{"level":"info","ts":"2024-07-31T18:48:32.152655Z","caller":"traceutil/trace.go:171","msg":"trace[169461020] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-326651-m03; range_end:; response_count:1; response_revision:2443; }","duration":"131.76851ms","start":"2024-07-31T18:48:32.020866Z","end":"2024-07-31T18:48:32.152635Z","steps":["trace[169461020] 'range keys from in-memory index tree'  (duration: 130.567299ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T18:49:19.262791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9bf1b68912964415 switched to configuration voters=(3586888314715132412 11236963245104710677)"}
	{"level":"info","ts":"2024-07-31T18:49:19.265444Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"6ea3af572c164fb3","local-member-id":"9bf1b68912964415","removed-remote-peer-id":"b8b1e75ba5ca8c5e","removed-remote-peer-urls":["https://192.168.39.50:2380"]}
	{"level":"info","ts":"2024-07-31T18:49:19.26561Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.266801Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:49:19.266932Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.267338Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:49:19.267492Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:49:19.26771Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.268035Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e","error":"context canceled"}
	{"level":"warn","ts":"2024-07-31T18:49:19.268122Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"b8b1e75ba5ca8c5e","error":"failed to read b8b1e75ba5ca8c5e on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-31T18:49:19.268232Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.268443Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e","error":"context canceled"}
	{"level":"info","ts":"2024-07-31T18:49:19.268496Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:49:19.268567Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:49:19.268714Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"9bf1b68912964415","removed-remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.281871Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"9bf1b68912964415","remote-peer-id-stream-handler":"9bf1b68912964415","remote-peer-id-from":"b8b1e75ba5ca8c5e"}
	{"level":"warn","ts":"2024-07-31T18:49:19.289581Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"9bf1b68912964415","remote-peer-id-stream-handler":"9bf1b68912964415","remote-peer-id-from":"b8b1e75ba5ca8c5e"}
	
	
	==> etcd [bd3d8dbedb96a7da6c1ff8a9d8b8a3881f634922cb460cc4075fd7692be18e6a] <==
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T18:45:08.688894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"629.678482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-31T18:45:08.697545Z","caller":"traceutil/trace.go:171","msg":"trace[1853778956] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; }","duration":"638.32401ms","start":"2024-07-31T18:45:08.059211Z","end":"2024-07-31T18:45:08.697535Z","steps":["trace[1853778956] 'agreement among raft nodes before linearized reading'  (duration: 629.67842ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T18:45:08.697614Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T18:45:08.059193Z","time spent":"638.408518ms","remote":"127.0.0.1:58462","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" limit:10000 "}
	2024/07/31 18:45:08 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-31T18:45:08.75305Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"9bf1b68912964415","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T18:45:08.75333Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753369Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753398Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753482Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753561Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753594Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.753623Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"31c72feb079851fc"}
	{"level":"info","ts":"2024-07-31T18:45:08.75363Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753645Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753679Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753742Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753786Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753853Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"9bf1b68912964415","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.753866Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"b8b1e75ba5ca8c5e"}
	{"level":"info","ts":"2024-07-31T18:45:08.756511Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.220:2380"}
	{"level":"info","ts":"2024-07-31T18:45:08.756672Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.220:2380"}
	{"level":"info","ts":"2024-07-31T18:45:08.756714Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-326651","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.220:2380"],"advertise-client-urls":["https://192.168.39.220:2379"]}
	
	
	==> kernel <==
	 18:51:53 up 17 min,  0 users,  load average: 0.18, 0.32, 0.26
	Linux ha-326651 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [35b19bba2ba5ef11762fea5d9a23c5baae75b45cf0228db25cb53c8cb547fe60] <==
	I0731 18:51:04.864682       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:51:14.865076       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:51:14.865272       1 main.go:299] handling current node
	I0731 18:51:14.865325       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:51:14.865362       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:51:14.865558       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:51:14.865603       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:51:24.870913       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:51:24.870997       1 main.go:299] handling current node
	I0731 18:51:24.871047       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:51:24.871053       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:51:24.871207       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:51:24.871214       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:51:34.872279       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:51:34.872392       1 main.go:299] handling current node
	I0731 18:51:34.872423       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:51:34.872441       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:51:34.872604       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:51:34.872627       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:51:44.864085       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:51:44.864387       1 main.go:299] handling current node
	I0731 18:51:44.864424       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:51:44.864445       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:51:44.864631       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:51:44.864656       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [81362a0e08184ec0df6f17812cb8d8e751be13e292f412545493c215aca8f821] <==
	I0731 18:44:39.756611       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:39.756631       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:39.756802       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:39.756835       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:39.756902       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:39.756919       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	I0731 18:44:49.760225       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:44:49.760398       1 main.go:299] handling current node
	I0731 18:44:49.760465       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:49.760521       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:49.760707       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:49.760759       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:49.760872       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:49.760915       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	E0731 18:44:57.441453       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1913&timeout=6m20s&timeoutSeconds=380&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0731 18:44:59.757045       1 main.go:295] Handling node with IPs: map[192.168.39.220:{}]
	I0731 18:44:59.757209       1 main.go:299] handling current node
	I0731 18:44:59.757245       1 main.go:295] Handling node with IPs: map[192.168.39.202:{}]
	I0731 18:44:59.757265       1 main.go:322] Node ha-326651-m02 has CIDR [10.244.1.0/24] 
	I0731 18:44:59.757429       1 main.go:295] Handling node with IPs: map[192.168.39.50:{}]
	I0731 18:44:59.757451       1 main.go:322] Node ha-326651-m03 has CIDR [10.244.2.0/24] 
	I0731 18:44:59.757543       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0731 18:44:59.757564       1 main.go:322] Node ha-326651-m04 has CIDR [10.244.3.0/24] 
	W0731 18:45:07.057554       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	E0731 18:45:07.057628       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	
	
	==> kube-apiserver [64795d240e81b6cb112c0232263a2a70d3d923be900551965c165bc90395de5a] <==
	I0731 18:46:44.183708       1 options.go:221] external host was not specified, using 192.168.39.220
	I0731 18:46:44.189288       1 server.go:148] Version: v1.30.3
	I0731 18:46:44.189473       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:46:44.601698       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 18:46:44.620794       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 18:46:44.629213       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 18:46:44.629283       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 18:46:44.629478       1 instance.go:299] Using reconciler: lease
	W0731 18:47:04.601351       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 18:47:04.601490       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0731 18:47:04.630676       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0731 18:47:04.630741       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [74114b61ea048fe6073f93d626c8c75686203e1e074a5973172e669410510eff] <==
	I0731 18:47:29.975628       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0731 18:47:30.047782       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 18:47:30.052775       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 18:47:30.056076       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 18:47:30.056401       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 18:47:30.056316       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 18:47:30.056356       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 18:47:30.056375       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 18:47:30.067648       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 18:47:30.070306       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 18:47:30.070389       1 policy_source.go:224] refreshing policies
	I0731 18:47:30.075645       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 18:47:30.075688       1 aggregator.go:165] initial CRD sync complete...
	I0731 18:47:30.075701       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 18:47:30.075707       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 18:47:30.075712       1 cache.go:39] Caches are synced for autoregister controller
	W0731 18:47:30.077919       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.50]
	I0731 18:47:30.081631       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 18:47:30.095574       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 18:47:30.102552       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 18:47:30.153468       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 18:47:30.962903       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 18:47:31.321227       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.50]
	W0731 18:47:51.325371       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.220]
	W0731 18:49:31.328007       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.220]
	
	
	==> kube-controller-manager [02116477a186640163031ccf4cac9785d72cf9dfdf05ef75451dcc6968632af0] <==
	E0731 18:50:02.531709       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:02.531737       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:02.531761       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:02.531784       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	I0731 18:50:08.056052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.310331ms"
	I0731 18:50:08.056944       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.377µs"
	E0731 18:50:22.532608       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:22.532733       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:22.532761       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:22.532832       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	E0731 18:50:22.532873       1 gc_controller.go:153] "Failed to get node" err="node \"ha-326651-m03\" not found" logger="pod-garbage-collector-controller" node="ha-326651-m03"
	I0731 18:50:22.545085       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-326651-m03"
	I0731 18:50:22.577789       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-326651-m03"
	I0731 18:50:22.577897       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-86n7r"
	I0731 18:50:22.609940       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-86n7r"
	I0731 18:50:22.609978       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-326651-m03"
	I0731 18:50:22.640320       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-326651-m03"
	I0731 18:50:22.641057       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-326651-m03"
	I0731 18:50:22.671486       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-326651-m03"
	I0731 18:50:22.671525       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-326651-m03"
	I0731 18:50:22.702953       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-326651-m03"
	I0731 18:50:22.702989       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lhprb"
	I0731 18:50:22.735498       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-lhprb"
	I0731 18:50:22.735534       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-326651-m03"
	I0731 18:50:22.764582       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-326651-m03"
	
	
	==> kube-controller-manager [c3968b33a3882a98f8edf0d38b7652d93ed9fd16b25e8b32be114fc87faa5e85] <==
	I0731 18:46:44.989845       1 serving.go:380] Generated self-signed cert in-memory
	I0731 18:46:45.387605       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 18:46:45.388528       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 18:46:45.390314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 18:46:45.391063       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 18:46:45.391296       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 18:46:45.391400       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0731 18:47:05.637211       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.220:8443/healthz\": dial tcp 192.168.39.220:8443: connect: connection refused"
	
	
	==> kube-proxy [5abc9372bd5fd2ce7ceaca4e1b2f1b59cd7e074b78381407e2a3f80fc420d0bd] <==
	E0731 18:44:05.155212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226547       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226674       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:08.226787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226850       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:08.226794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.368930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.369483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369578       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:14.369789       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:14.369907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:23.584585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:23.585009       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:23.585172       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:23.585206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:26.657214       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:26.657326       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:38.945380       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:38.945604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1968": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:42.017441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:42.017871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&resourceVersion=1890": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:44:45.090227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:44:45.090587       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1937": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [68867a78c6b361ec5ac92cf90d2aca3f8f8b74f0bb93a2b34f1b8660d2448b5d] <==
	I0731 18:47:24.879401       1 config.go:192] "Starting service config controller"
	I0731 18:47:24.879444       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 18:47:24.879474       1 config.go:101] "Starting endpoint slice config controller"
	I0731 18:47:24.879494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 18:47:24.880191       1 config.go:319] "Starting node config controller"
	I0731 18:47:24.880217       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0731 18:47:27.905444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.905638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906200       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0731 18:47:27.906178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:27.906273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:27.906741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977543       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.979116       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-326651&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 18:47:30.977800       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.979325       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 18:47:30.977883       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0731 18:47:32.779851       1 shared_informer.go:320] Caches are synced for service config
	I0731 18:47:32.881427       1 shared_informer.go:320] Caches are synced for node config
	I0731 18:47:33.080063       1 shared_informer.go:320] Caches are synced for endpoint slice config
	W0731 18:50:16.343930       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0731 18:50:16.344515       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0731 18:50:16.344651       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [b98405b29355df1746cb22d56e4682522434abe5b4859eb78a58318d17e92094] <==
	E0731 18:47:22.709030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.220:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:22.818245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.220:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:22.818403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.220:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:23.361683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.220:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:23.361812       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.220:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:23.737699       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.220:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:23.737774       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.220:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:24.304483       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.220:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	E0731 18:47:24.304554       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.220:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.220:8443: connect: connection refused
	W0731 18:47:29.986040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.986095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 18:47:29.986250       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:47:29.986285       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:47:29.986573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 18:47:29.986620       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 18:47:29.986715       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.986749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 18:47:29.986819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:47:29.989264       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:47:29.991985       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:47:29.992078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 18:47:44.046326       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 18:49:15.933914       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tlkkk\": pod busybox-fc5497c4f-tlkkk is already assigned to node \"ha-326651-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-tlkkk" node="ha-326651-m04"
	E0731 18:49:15.934117       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tlkkk\": pod busybox-fc5497c4f-tlkkk is already assigned to node \"ha-326651-m04\"" pod="default/busybox-fc5497c4f-tlkkk"
	I0731 18:49:15.934210       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tlkkk" node="ha-326651-m04"
	
	
	==> kube-scheduler [c40e9679adc35d070271a569a630e5a12d60997c15504b78bb94eee88bb0c8fd] <==
	W0731 18:45:05.352085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 18:45:05.352191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 18:45:05.936509       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 18:45:05.936661       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 18:45:06.308794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 18:45:06.308901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 18:45:06.514414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 18:45:06.514509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 18:45:06.652032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:06.652230       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 18:45:06.663388       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 18:45:06.663496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 18:45:06.744580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:06.744633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 18:45:06.913051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 18:45:06.913108       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 18:45:07.121232       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 18:45:07.121344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 18:45:07.139615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 18:45:07.139647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 18:45:07.284416       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 18:45:07.284469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 18:45:07.868544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:07.868588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 18:45:08.662471       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 18:47:46 ha-326651 kubelet[1381]: E0731 18:47:46.314302    1381 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(83869540-accb-4a58-b094-6bdc6b4c1944)\"" pod="kube-system/storage-provisioner" podUID="83869540-accb-4a58-b094-6bdc6b4c1944"
	Jul 31 18:48:01 ha-326651 kubelet[1381]: I0731 18:48:01.314264    1381 scope.go:117] "RemoveContainer" containerID="f2cbf6604849f743406213c4c85744cbe2db8223c43ea3d171ddd7826426c1e5"
	Jul 31 18:48:02 ha-326651 kubelet[1381]: I0731 18:48:02.026522    1381 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-mknlp" podStartSLOduration=582.448537193 podStartE2EDuration="9m45.026486187s" podCreationTimestamp="2024-07-31 18:38:17 +0000 UTC" firstStartedPulling="2024-07-31 18:38:17.621941084 +0000 UTC m=+177.481204405" lastFinishedPulling="2024-07-31 18:38:20.199890073 +0000 UTC m=+180.059153399" observedRunningTime="2024-07-31 18:38:21.104477981 +0000 UTC m=+180.963741322" watchObservedRunningTime="2024-07-31 18:48:02.026486187 +0000 UTC m=+761.885749528"
	Jul 31 18:48:20 ha-326651 kubelet[1381]: E0731 18:48:20.363538    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:48:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:48:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:48:24 ha-326651 kubelet[1381]: I0731 18:48:24.315212    1381 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-326651" podUID="55d22288-ccee-4e17-95b6-4a96e86fca09"
	Jul 31 18:48:24 ha-326651 kubelet[1381]: I0731 18:48:24.519209    1381 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-326651"
	Jul 31 18:49:20 ha-326651 kubelet[1381]: E0731 18:49:20.358116    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:49:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:49:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:49:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:49:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:50:20 ha-326651 kubelet[1381]: E0731 18:50:20.358439    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:50:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:50:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:50:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:50:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 18:51:20 ha-326651 kubelet[1381]: E0731 18:51:20.354914    1381 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 18:51:20 ha-326651 kubelet[1381]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 18:51:20 ha-326651 kubelet[1381]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 18:51:20 ha-326651 kubelet[1381]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 18:51:20 ha-326651 kubelet[1381]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 18:51:52.540304  422642 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19356-395032/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-326651 -n ha-326651
helpers_test.go:261: (dbg) Run:  kubectl --context ha-326651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.78s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-741077
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-741077
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-741077: exit status 82 (2m1.902405747s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-741077-m03"  ...
	* Stopping node "multinode-741077-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-741077" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-741077 --wait=true -v=8 --alsologtostderr
E0731 19:08:48.018133  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 19:10:32.742479  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 19:11:51.065695  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-741077 --wait=true -v=8 --alsologtostderr: (3m21.918696358s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-741077
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-741077 -n multinode-741077
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-741077 logs -n 25: (1.54793414s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077:/home/docker/cp-test_multinode-741077-m02_multinode-741077.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077 sudo cat                                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m02_multinode-741077.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03:/home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077-m03 sudo cat                                   | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp testdata/cp-test.txt                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077:/home/docker/cp-test_multinode-741077-m03_multinode-741077.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077 sudo cat                                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02:/home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077-m02 sudo cat                                   | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-741077 node stop m03                                                          | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:06 UTC |
	| node    | multinode-741077 node start                                                             | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC | 31 Jul 24 19:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| stop    | -p multinode-741077                                                                     | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| start   | -p multinode-741077                                                                     | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:08 UTC | 31 Jul 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
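	The table above is minikube's audit of the commands issued during the multinode copy-file checks: each cp row pushes a file onto a node, and the paired "ssh -n ... sudo cat" row reads it back to verify the contents. A minimal sketch of that round trip, reconstructed from the rows above (the profile and node names are the ones used in this run; the exact wrapper the test harness uses to invoke the binary is not shown here):

	# push a local file onto a specific node of the profile
	minikube -p multinode-741077 cp testdata/cp-test.txt multinode-741077-m03:/home/docker/cp-test.txt
	# read it back over SSH on that node to confirm the copy
	minikube -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test.txt"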
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:08:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:08:42.936685  431884 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:08:42.936841  431884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:08:42.936852  431884 out.go:304] Setting ErrFile to fd 2...
	I0731 19:08:42.936859  431884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:08:42.937037  431884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:08:42.937588  431884 out.go:298] Setting JSON to false
	I0731 19:08:42.938617  431884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10266,"bootTime":1722442657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:08:42.938681  431884 start.go:139] virtualization: kvm guest
	I0731 19:08:42.941162  431884 out.go:177] * [multinode-741077] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:08:42.942921  431884 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:08:42.942955  431884 notify.go:220] Checking for updates...
	I0731 19:08:42.945656  431884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:08:42.947015  431884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:08:42.948472  431884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:08:42.950053  431884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:08:42.951481  431884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:08:42.953269  431884 config.go:182] Loaded profile config "multinode-741077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:08:42.953387  431884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:08:42.953882  431884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:08:42.953942  431884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:08:42.970048  431884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0731 19:08:42.970614  431884 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:08:42.971222  431884 main.go:141] libmachine: Using API Version  1
	I0731 19:08:42.971248  431884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:08:42.971589  431884 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:08:42.971792  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.007792  431884 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:08:43.009139  431884 start.go:297] selected driver: kvm2
	I0731 19:08:43.009158  431884 start.go:901] validating driver "kvm2" against &{Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:08:43.009325  431884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:08:43.009698  431884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:08:43.009777  431884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:08:43.025166  431884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:08:43.025924  431884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:08:43.026007  431884 cni.go:84] Creating CNI manager for ""
	I0731 19:08:43.026026  431884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 19:08:43.026096  431884 start.go:340] cluster config:
	{Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:08:43.026260  431884 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:08:43.028901  431884 out.go:177] * Starting "multinode-741077" primary control-plane node in "multinode-741077" cluster
	I0731 19:08:43.030416  431884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:08:43.030465  431884 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:08:43.030477  431884 cache.go:56] Caching tarball of preloaded images
	I0731 19:08:43.030587  431884 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:08:43.030599  431884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:08:43.030723  431884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/config.json ...
	I0731 19:08:43.030979  431884 start.go:360] acquireMachinesLock for multinode-741077: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:08:43.031029  431884 start.go:364] duration metric: took 27.686µs to acquireMachinesLock for "multinode-741077"
	I0731 19:08:43.031049  431884 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:08:43.031058  431884 fix.go:54] fixHost starting: 
	I0731 19:08:43.031321  431884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:08:43.031357  431884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:08:43.046407  431884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I0731 19:08:43.046857  431884 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:08:43.047334  431884 main.go:141] libmachine: Using API Version  1
	I0731 19:08:43.047361  431884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:08:43.047797  431884 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:08:43.048039  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.048231  431884 main.go:141] libmachine: (multinode-741077) Calling .GetState
	I0731 19:08:43.050007  431884 fix.go:112] recreateIfNeeded on multinode-741077: state=Running err=<nil>
	W0731 19:08:43.050026  431884 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:08:43.052167  431884 out.go:177] * Updating the running kvm2 "multinode-741077" VM ...
	I0731 19:08:43.053496  431884 machine.go:94] provisionDockerMachine start ...
	I0731 19:08:43.053530  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.053772  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.056343  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.056862  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.056892  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.057069  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.057255  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.057389  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.057516  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.057683  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.057931  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.057945  431884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:08:43.161617  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741077
	
	I0731 19:08:43.161651  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.161898  431884 buildroot.go:166] provisioning hostname "multinode-741077"
	I0731 19:08:43.161924  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.162159  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.165278  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.165669  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.165706  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.165805  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.166044  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.166279  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.166450  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.166662  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.166850  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.166873  431884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-741077 && echo "multinode-741077" | sudo tee /etc/hostname
	I0731 19:08:43.290649  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741077
	
	I0731 19:08:43.290676  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.293528  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.293916  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.293968  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.294183  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.294398  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.294553  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.294708  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.294882  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.295099  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.295122  431884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-741077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-741077/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-741077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:08:43.398057  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:08:43.398112  431884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:08:43.398149  431884 buildroot.go:174] setting up certificates
	I0731 19:08:43.398160  431884 provision.go:84] configureAuth start
	I0731 19:08:43.398174  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.398483  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:08:43.401464  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.401862  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.401892  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.402020  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.404799  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.405349  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.405380  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.405535  431884 provision.go:143] copyHostCerts
	I0731 19:08:43.405563  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:08:43.405590  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:08:43.405599  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:08:43.405666  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:08:43.405794  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:08:43.405831  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:08:43.405835  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:08:43.405863  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:08:43.405912  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:08:43.405928  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:08:43.405934  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:08:43.405955  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:08:43.405999  431884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.multinode-741077 san=[127.0.0.1 192.168.39.55 localhost minikube multinode-741077]
	I0731 19:08:43.702587  431884 provision.go:177] copyRemoteCerts
	I0731 19:08:43.702649  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:08:43.702675  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.705536  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.705883  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.705913  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.706091  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.706410  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.706607  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.706804  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:08:43.791316  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:08:43.791410  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:08:43.820288  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:08:43.820359  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:08:43.848969  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:08:43.849066  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 19:08:43.879761  431884 provision.go:87] duration metric: took 481.585158ms to configureAuth
	I0731 19:08:43.879789  431884 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:08:43.880022  431884 config.go:182] Loaded profile config "multinode-741077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:08:43.880150  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.883053  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.883383  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.883401  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.883615  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.883892  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.884085  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.884244  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.884416  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.884583  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.884599  431884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:10:14.767772  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:10:14.767805  431884 machine.go:97] duration metric: took 1m31.714286716s to provisionDockerMachine
	I0731 19:10:14.767826  431884 start.go:293] postStartSetup for "multinode-741077" (driver="kvm2")
	I0731 19:10:14.767855  431884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:10:14.767883  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:14.768253  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:10:14.768300  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:14.772178  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.772674  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:14.772713  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.772898  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:14.773088  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.773299  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:14.773454  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:14.856337  431884 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:10:14.861107  431884 command_runner.go:130] > NAME=Buildroot
	I0731 19:10:14.861138  431884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 19:10:14.861142  431884 command_runner.go:130] > ID=buildroot
	I0731 19:10:14.861147  431884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 19:10:14.861152  431884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 19:10:14.861186  431884 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:10:14.861199  431884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:10:14.861268  431884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:10:14.861346  431884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:10:14.861356  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 19:10:14.861436  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:10:14.872547  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:10:14.900221  431884 start.go:296] duration metric: took 132.374172ms for postStartSetup
	I0731 19:10:14.900271  431884 fix.go:56] duration metric: took 1m31.869212292s for fixHost
	I0731 19:10:14.900302  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:14.903061  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.903441  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:14.903484  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.903646  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:14.903864  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.904024  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.904155  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:14.904286  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:10:14.904516  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:10:14.904530  431884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:10:15.009569  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722453014.984856395
	
	I0731 19:10:15.009594  431884 fix.go:216] guest clock: 1722453014.984856395
	I0731 19:10:15.009604  431884 fix.go:229] Guest: 2024-07-31 19:10:14.984856395 +0000 UTC Remote: 2024-07-31 19:10:14.900278853 +0000 UTC m=+92.000956096 (delta=84.577542ms)
	I0731 19:10:15.009657  431884 fix.go:200] guest clock delta is within tolerance: 84.577542ms
	I0731 19:10:15.009667  431884 start.go:83] releasing machines lock for "multinode-741077", held for 1m31.978625357s
	I0731 19:10:15.009699  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.009977  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:10:15.013169  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.013697  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.013725  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.013908  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014459  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014664  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014770  431884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:10:15.014817  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:15.014943  431884 ssh_runner.go:195] Run: cat /version.json
	I0731 19:10:15.014968  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:15.017511  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.017709  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.017975  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.018001  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.018079  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.018110  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.018115  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:15.018309  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:15.018345  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:15.018411  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:15.018509  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:15.018528  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:15.018690  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:15.018686  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:15.114093  431884 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 19:10:15.114190  431884 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 19:10:15.114275  431884 ssh_runner.go:195] Run: systemctl --version
	I0731 19:10:15.120053  431884 command_runner.go:130] > systemd 252 (252)
	I0731 19:10:15.120109  431884 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 19:10:15.120187  431884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:10:15.285534  431884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 19:10:15.291636  431884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 19:10:15.291723  431884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:10:15.291785  431884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:10:15.301364  431884 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:10:15.301395  431884 start.go:495] detecting cgroup driver to use...
	I0731 19:10:15.301477  431884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:10:15.317720  431884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:10:15.332202  431884 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:10:15.332259  431884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:10:15.346838  431884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:10:15.362440  431884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:10:15.514690  431884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:10:15.663447  431884 docker.go:233] disabling docker service ...
	I0731 19:10:15.663533  431884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:10:15.681477  431884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:10:15.695228  431884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:10:15.840827  431884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:10:15.983513  431884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:10:15.998520  431884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:10:16.018656  431884 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 19:10:16.019105  431884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:10:16.019176  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.030159  431884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:10:16.030234  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.042285  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.052684  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.063375  431884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:10:16.074276  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.084806  431884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.096195  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.106717  431884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:10:16.116407  431884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 19:10:16.116486  431884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:10:16.125779  431884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:10:16.267843  431884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:10:16.520510  431884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:10:16.520589  431884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:10:16.525981  431884 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 19:10:16.526008  431884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 19:10:16.526017  431884 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0731 19:10:16.526026  431884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 19:10:16.526042  431884 command_runner.go:130] > Access: 2024-07-31 19:10:16.395840098 +0000
	I0731 19:10:16.526080  431884 command_runner.go:130] > Modify: 2024-07-31 19:10:16.390839950 +0000
	I0731 19:10:16.526093  431884 command_runner.go:130] > Change: 2024-07-31 19:10:16.390839950 +0000
	I0731 19:10:16.526098  431884 command_runner.go:130] >  Birth: -
	I0731 19:10:16.526125  431884 start.go:563] Will wait 60s for crictl version
	I0731 19:10:16.526182  431884 ssh_runner.go:195] Run: which crictl
	I0731 19:10:16.529997  431884 command_runner.go:130] > /usr/bin/crictl
	I0731 19:10:16.530172  431884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:10:16.569102  431884 command_runner.go:130] > Version:  0.1.0
	I0731 19:10:16.569134  431884 command_runner.go:130] > RuntimeName:  cri-o
	I0731 19:10:16.569141  431884 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 19:10:16.569154  431884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 19:10:16.570412  431884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:10:16.570518  431884 ssh_runner.go:195] Run: crio --version
	I0731 19:10:16.599059  431884 command_runner.go:130] > crio version 1.29.1
	I0731 19:10:16.599090  431884 command_runner.go:130] > Version:        1.29.1
	I0731 19:10:16.599100  431884 command_runner.go:130] > GitCommit:      unknown
	I0731 19:10:16.599107  431884 command_runner.go:130] > GitCommitDate:  unknown
	I0731 19:10:16.599113  431884 command_runner.go:130] > GitTreeState:   clean
	I0731 19:10:16.599123  431884 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 19:10:16.599130  431884 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 19:10:16.599141  431884 command_runner.go:130] > Compiler:       gc
	I0731 19:10:16.599149  431884 command_runner.go:130] > Platform:       linux/amd64
	I0731 19:10:16.599157  431884 command_runner.go:130] > Linkmode:       dynamic
	I0731 19:10:16.599164  431884 command_runner.go:130] > BuildTags:      
	I0731 19:10:16.599171  431884 command_runner.go:130] >   containers_image_ostree_stub
	I0731 19:10:16.599178  431884 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 19:10:16.599188  431884 command_runner.go:130] >   btrfs_noversion
	I0731 19:10:16.599203  431884 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 19:10:16.599213  431884 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 19:10:16.599220  431884 command_runner.go:130] >   seccomp
	I0731 19:10:16.599227  431884 command_runner.go:130] > LDFlags:          unknown
	I0731 19:10:16.599237  431884 command_runner.go:130] > SeccompEnabled:   true
	I0731 19:10:16.599243  431884 command_runner.go:130] > AppArmorEnabled:  false
	I0731 19:10:16.600495  431884 ssh_runner.go:195] Run: crio --version
	I0731 19:10:16.629043  431884 command_runner.go:130] > crio version 1.29.1
	I0731 19:10:16.629066  431884 command_runner.go:130] > Version:        1.29.1
	I0731 19:10:16.629072  431884 command_runner.go:130] > GitCommit:      unknown
	I0731 19:10:16.629078  431884 command_runner.go:130] > GitCommitDate:  unknown
	I0731 19:10:16.629084  431884 command_runner.go:130] > GitTreeState:   clean
	I0731 19:10:16.629107  431884 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 19:10:16.629113  431884 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 19:10:16.629119  431884 command_runner.go:130] > Compiler:       gc
	I0731 19:10:16.629126  431884 command_runner.go:130] > Platform:       linux/amd64
	I0731 19:10:16.629134  431884 command_runner.go:130] > Linkmode:       dynamic
	I0731 19:10:16.629140  431884 command_runner.go:130] > BuildTags:      
	I0731 19:10:16.629147  431884 command_runner.go:130] >   containers_image_ostree_stub
	I0731 19:10:16.629153  431884 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 19:10:16.629162  431884 command_runner.go:130] >   btrfs_noversion
	I0731 19:10:16.629169  431884 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 19:10:16.629179  431884 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 19:10:16.629185  431884 command_runner.go:130] >   seccomp
	I0731 19:10:16.629191  431884 command_runner.go:130] > LDFlags:          unknown
	I0731 19:10:16.629198  431884 command_runner.go:130] > SeccompEnabled:   true
	I0731 19:10:16.629205  431884 command_runner.go:130] > AppArmorEnabled:  false
	I0731 19:10:16.631236  431884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:10:16.632625  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:10:16.635405  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:16.635761  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:16.635791  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:16.636070  431884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:10:16.640249  431884 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 19:10:16.640359  431884 kubeadm.go:883] updating cluster {Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:10:16.640539  431884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:10:16.640604  431884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:10:16.687815  431884 command_runner.go:130] > {
	I0731 19:10:16.687838  431884 command_runner.go:130] >   "images": [
	I0731 19:10:16.687842  431884 command_runner.go:130] >     {
	I0731 19:10:16.687850  431884 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 19:10:16.687856  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.687861  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 19:10:16.687865  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687869  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.687877  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 19:10:16.687887  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 19:10:16.687893  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687898  431884 command_runner.go:130] >       "size": "87165492",
	I0731 19:10:16.687904  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.687912  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.687920  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.687928  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.687937  431884 command_runner.go:130] >     },
	I0731 19:10:16.687945  431884 command_runner.go:130] >     {
	I0731 19:10:16.687951  431884 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 19:10:16.687957  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.687963  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 19:10:16.687969  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687973  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.687982  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 19:10:16.687996  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 19:10:16.688003  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688010  431884 command_runner.go:130] >       "size": "87174707",
	I0731 19:10:16.688019  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688033  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688041  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688048  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688052  431884 command_runner.go:130] >     },
	I0731 19:10:16.688058  431884 command_runner.go:130] >     {
	I0731 19:10:16.688064  431884 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 19:10:16.688070  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688075  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 19:10:16.688081  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688085  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688098  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 19:10:16.688113  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 19:10:16.688119  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688128  431884 command_runner.go:130] >       "size": "1363676",
	I0731 19:10:16.688145  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688151  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688155  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688161  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688165  431884 command_runner.go:130] >     },
	I0731 19:10:16.688170  431884 command_runner.go:130] >     {
	I0731 19:10:16.688177  431884 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 19:10:16.688186  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688197  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 19:10:16.688206  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688213  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688227  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 19:10:16.688245  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 19:10:16.688251  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688255  431884 command_runner.go:130] >       "size": "31470524",
	I0731 19:10:16.688263  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688272  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688282  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688294  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688302  431884 command_runner.go:130] >     },
	I0731 19:10:16.688307  431884 command_runner.go:130] >     {
	I0731 19:10:16.688319  431884 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 19:10:16.688327  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688337  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 19:10:16.688343  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688347  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688362  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 19:10:16.688388  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 19:10:16.688397  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688407  431884 command_runner.go:130] >       "size": "61245718",
	I0731 19:10:16.688416  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688426  431884 command_runner.go:130] >       "username": "nonroot",
	I0731 19:10:16.688435  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688445  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688453  431884 command_runner.go:130] >     },
	I0731 19:10:16.688461  431884 command_runner.go:130] >     {
	I0731 19:10:16.688470  431884 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 19:10:16.688479  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688492  431884 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 19:10:16.688500  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688506  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688515  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 19:10:16.688530  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 19:10:16.688539  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688549  431884 command_runner.go:130] >       "size": "150779692",
	I0731 19:10:16.688558  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688567  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688577  431884 command_runner.go:130] >       },
	I0731 19:10:16.688586  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688593  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688598  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688607  431884 command_runner.go:130] >     },
	I0731 19:10:16.688616  431884 command_runner.go:130] >     {
	I0731 19:10:16.688629  431884 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 19:10:16.688640  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688652  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 19:10:16.688660  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688669  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688678  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 19:10:16.688691  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 19:10:16.688700  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688710  431884 command_runner.go:130] >       "size": "117609954",
	I0731 19:10:16.688719  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688728  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688736  431884 command_runner.go:130] >       },
	I0731 19:10:16.688743  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688751  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688759  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688765  431884 command_runner.go:130] >     },
	I0731 19:10:16.688769  431884 command_runner.go:130] >     {
	I0731 19:10:16.688779  431884 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 19:10:16.688790  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688802  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 19:10:16.688811  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688820  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688843  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 19:10:16.688855  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 19:10:16.688860  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688867  431884 command_runner.go:130] >       "size": "112198984",
	I0731 19:10:16.688876  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688885  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688893  431884 command_runner.go:130] >       },
	I0731 19:10:16.688901  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688907  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688914  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688920  431884 command_runner.go:130] >     },
	I0731 19:10:16.688925  431884 command_runner.go:130] >     {
	I0731 19:10:16.688933  431884 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 19:10:16.688936  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688941  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 19:10:16.688949  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688954  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688966  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 19:10:16.688978  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 19:10:16.688986  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688996  431884 command_runner.go:130] >       "size": "85953945",
	I0731 19:10:16.689006  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.689015  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689022  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689026  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.689034  431884 command_runner.go:130] >     },
	I0731 19:10:16.689039  431884 command_runner.go:130] >     {
	I0731 19:10:16.689053  431884 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 19:10:16.689063  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.689074  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 19:10:16.689083  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689092  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.689103  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 19:10:16.689113  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 19:10:16.689118  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689127  431884 command_runner.go:130] >       "size": "63051080",
	I0731 19:10:16.689138  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.689147  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.689155  431884 command_runner.go:130] >       },
	I0731 19:10:16.689164  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689173  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689183  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.689190  431884 command_runner.go:130] >     },
	I0731 19:10:16.689194  431884 command_runner.go:130] >     {
	I0731 19:10:16.689201  431884 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 19:10:16.689210  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.689220  431884 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 19:10:16.689228  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689235  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.689249  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 19:10:16.689263  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 19:10:16.689272  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689279  431884 command_runner.go:130] >       "size": "750414",
	I0731 19:10:16.689283  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.689289  431884 command_runner.go:130] >         "value": "65535"
	I0731 19:10:16.689298  431884 command_runner.go:130] >       },
	I0731 19:10:16.689305  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689314  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689324  431884 command_runner.go:130] >       "pinned": true
	I0731 19:10:16.689332  431884 command_runner.go:130] >     }
	I0731 19:10:16.689341  431884 command_runner.go:130] >   ]
	I0731 19:10:16.689349  431884 command_runner.go:130] > }
	I0731 19:10:16.689944  431884 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:10:16.689971  431884 crio.go:433] Images already preloaded, skipping extraction
	I0731 19:10:16.690050  431884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:10:16.724280  431884 command_runner.go:130] > {
	I0731 19:10:16.724305  431884 command_runner.go:130] >   "images": [
	I0731 19:10:16.724310  431884 command_runner.go:130] >     {
	I0731 19:10:16.724322  431884 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 19:10:16.724328  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724336  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 19:10:16.724341  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724347  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724358  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 19:10:16.724369  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 19:10:16.724390  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724401  431884 command_runner.go:130] >       "size": "87165492",
	I0731 19:10:16.724411  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724418  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724429  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724438  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724444  431884 command_runner.go:130] >     },
	I0731 19:10:16.724450  431884 command_runner.go:130] >     {
	I0731 19:10:16.724461  431884 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 19:10:16.724471  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724482  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 19:10:16.724494  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724502  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724517  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 19:10:16.724532  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 19:10:16.724544  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724555  431884 command_runner.go:130] >       "size": "87174707",
	I0731 19:10:16.724564  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724574  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724584  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724591  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724599  431884 command_runner.go:130] >     },
	I0731 19:10:16.724605  431884 command_runner.go:130] >     {
	I0731 19:10:16.724619  431884 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 19:10:16.724629  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724639  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 19:10:16.724662  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724671  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724684  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 19:10:16.724699  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 19:10:16.724707  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724716  431884 command_runner.go:130] >       "size": "1363676",
	I0731 19:10:16.724724  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724732  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724758  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724767  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724772  431884 command_runner.go:130] >     },
	I0731 19:10:16.724776  431884 command_runner.go:130] >     {
	I0731 19:10:16.724786  431884 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 19:10:16.724796  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724806  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 19:10:16.724813  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724822  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724838  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 19:10:16.724861  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 19:10:16.724869  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724877  431884 command_runner.go:130] >       "size": "31470524",
	I0731 19:10:16.724889  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724899  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724906  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724915  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724923  431884 command_runner.go:130] >     },
	I0731 19:10:16.724929  431884 command_runner.go:130] >     {
	I0731 19:10:16.724943  431884 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 19:10:16.724952  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724960  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 19:10:16.724965  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724972  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724988  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 19:10:16.725003  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 19:10:16.725011  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725019  431884 command_runner.go:130] >       "size": "61245718",
	I0731 19:10:16.725028  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.725037  431884 command_runner.go:130] >       "username": "nonroot",
	I0731 19:10:16.725047  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725054  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725062  431884 command_runner.go:130] >     },
	I0731 19:10:16.725068  431884 command_runner.go:130] >     {
	I0731 19:10:16.725079  431884 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 19:10:16.725089  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725099  431884 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 19:10:16.725124  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725135  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725149  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 19:10:16.725163  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 19:10:16.725172  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725179  431884 command_runner.go:130] >       "size": "150779692",
	I0731 19:10:16.725188  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725195  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725208  431884 command_runner.go:130] >       },
	I0731 19:10:16.725219  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725228  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725236  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725247  431884 command_runner.go:130] >     },
	I0731 19:10:16.725255  431884 command_runner.go:130] >     {
	I0731 19:10:16.725266  431884 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 19:10:16.725275  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725283  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 19:10:16.725292  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725299  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725315  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 19:10:16.725330  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 19:10:16.725339  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725347  431884 command_runner.go:130] >       "size": "117609954",
	I0731 19:10:16.725355  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725362  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725370  431884 command_runner.go:130] >       },
	I0731 19:10:16.725377  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725387  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725395  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725403  431884 command_runner.go:130] >     },
	I0731 19:10:16.725409  431884 command_runner.go:130] >     {
	I0731 19:10:16.725421  431884 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 19:10:16.725429  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725439  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 19:10:16.725447  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725455  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725480  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 19:10:16.725502  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 19:10:16.725509  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725516  431884 command_runner.go:130] >       "size": "112198984",
	I0731 19:10:16.725525  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725532  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725541  431884 command_runner.go:130] >       },
	I0731 19:10:16.725548  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725555  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725561  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725569  431884 command_runner.go:130] >     },
	I0731 19:10:16.725575  431884 command_runner.go:130] >     {
	I0731 19:10:16.725590  431884 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 19:10:16.725599  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725609  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 19:10:16.725617  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725624  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725639  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 19:10:16.725658  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 19:10:16.725667  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725674  431884 command_runner.go:130] >       "size": "85953945",
	I0731 19:10:16.725684  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.725691  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725700  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725707  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725715  431884 command_runner.go:130] >     },
	I0731 19:10:16.725722  431884 command_runner.go:130] >     {
	I0731 19:10:16.725735  431884 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 19:10:16.725745  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725757  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 19:10:16.725765  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725772  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725786  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 19:10:16.725802  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 19:10:16.725810  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725818  431884 command_runner.go:130] >       "size": "63051080",
	I0731 19:10:16.725826  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725834  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725842  431884 command_runner.go:130] >       },
	I0731 19:10:16.725849  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725858  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725864  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725869  431884 command_runner.go:130] >     },
	I0731 19:10:16.725875  431884 command_runner.go:130] >     {
	I0731 19:10:16.725888  431884 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 19:10:16.725897  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725905  431884 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 19:10:16.725914  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725923  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725938  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 19:10:16.725953  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 19:10:16.725961  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725969  431884 command_runner.go:130] >       "size": "750414",
	I0731 19:10:16.725977  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725985  431884 command_runner.go:130] >         "value": "65535"
	I0731 19:10:16.725993  431884 command_runner.go:130] >       },
	I0731 19:10:16.726001  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.726010  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.726018  431884 command_runner.go:130] >       "pinned": true
	I0731 19:10:16.726026  431884 command_runner.go:130] >     }
	I0731 19:10:16.726034  431884 command_runner.go:130] >   ]
	I0731 19:10:16.726041  431884 command_runner.go:130] > }
	I0731 19:10:16.726173  431884 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:10:16.726186  431884 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:10:16.726196  431884 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.30.3 crio true true} ...
	I0731 19:10:16.726333  431884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-741077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:10:16.726428  431884 ssh_runner.go:195] Run: crio config
	I0731 19:10:16.768123  431884 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 19:10:16.768179  431884 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 19:10:16.768187  431884 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 19:10:16.768190  431884 command_runner.go:130] > #
	I0731 19:10:16.768200  431884 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 19:10:16.768206  431884 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 19:10:16.768212  431884 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 19:10:16.768218  431884 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 19:10:16.768222  431884 command_runner.go:130] > # reload'.
	I0731 19:10:16.768228  431884 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 19:10:16.768234  431884 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 19:10:16.768244  431884 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 19:10:16.768256  431884 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 19:10:16.768263  431884 command_runner.go:130] > [crio]
	I0731 19:10:16.768273  431884 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 19:10:16.768280  431884 command_runner.go:130] > # containers images, in this directory.
	I0731 19:10:16.768298  431884 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 19:10:16.768319  431884 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 19:10:16.768332  431884 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 19:10:16.768341  431884 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 19:10:16.768561  431884 command_runner.go:130] > # imagestore = ""
	I0731 19:10:16.768580  431884 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 19:10:16.768590  431884 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 19:10:16.768703  431884 command_runner.go:130] > storage_driver = "overlay"
	I0731 19:10:16.768718  431884 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 19:10:16.768726  431884 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 19:10:16.768732  431884 command_runner.go:130] > storage_option = [
	I0731 19:10:16.769386  431884 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 19:10:16.769404  431884 command_runner.go:130] > ]
	I0731 19:10:16.769414  431884 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 19:10:16.769438  431884 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 19:10:16.769490  431884 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 19:10:16.769510  431884 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 19:10:16.769518  431884 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 19:10:16.769527  431884 command_runner.go:130] > # always happen on a node reboot
	I0731 19:10:16.769533  431884 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 19:10:16.769550  431884 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 19:10:16.769558  431884 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 19:10:16.769567  431884 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 19:10:16.769574  431884 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 19:10:16.769587  431884 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 19:10:16.769600  431884 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 19:10:16.769609  431884 command_runner.go:130] > # internal_wipe = true
	I0731 19:10:16.769619  431884 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 19:10:16.769630  431884 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 19:10:16.769640  431884 command_runner.go:130] > # internal_repair = false
	I0731 19:10:16.769653  431884 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 19:10:16.769665  431884 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 19:10:16.769676  431884 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 19:10:16.769686  431884 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 19:10:16.769697  431884 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 19:10:16.769704  431884 command_runner.go:130] > [crio.api]
	I0731 19:10:16.769713  431884 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 19:10:16.769723  431884 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 19:10:16.769734  431884 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 19:10:16.769742  431884 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 19:10:16.769753  431884 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 19:10:16.769763  431884 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 19:10:16.769770  431884 command_runner.go:130] > # stream_port = "0"
	I0731 19:10:16.769780  431884 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 19:10:16.769792  431884 command_runner.go:130] > # stream_enable_tls = false
	I0731 19:10:16.769803  431884 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 19:10:16.769811  431884 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 19:10:16.769825  431884 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 19:10:16.769836  431884 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 19:10:16.769843  431884 command_runner.go:130] > # minutes.
	I0731 19:10:16.769853  431884 command_runner.go:130] > # stream_tls_cert = ""
	I0731 19:10:16.769869  431884 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 19:10:16.769890  431884 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 19:10:16.769902  431884 command_runner.go:130] > # stream_tls_key = ""
	I0731 19:10:16.769915  431884 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 19:10:16.769927  431884 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 19:10:16.769947  431884 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 19:10:16.769958  431884 command_runner.go:130] > # stream_tls_ca = ""
	I0731 19:10:16.769970  431884 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 19:10:16.769978  431884 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 19:10:16.769992  431884 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 19:10:16.770004  431884 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 19:10:16.770014  431884 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 19:10:16.770026  431884 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 19:10:16.770034  431884 command_runner.go:130] > [crio.runtime]
	I0731 19:10:16.770042  431884 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 19:10:16.770049  431884 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 19:10:16.770057  431884 command_runner.go:130] > # "nofile=1024:2048"
	I0731 19:10:16.770067  431884 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 19:10:16.770080  431884 command_runner.go:130] > # default_ulimits = [
	I0731 19:10:16.770085  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770095  431884 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 19:10:16.770101  431884 command_runner.go:130] > # no_pivot = false
	I0731 19:10:16.770111  431884 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 19:10:16.770123  431884 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 19:10:16.770132  431884 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 19:10:16.770145  431884 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 19:10:16.770158  431884 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 19:10:16.770170  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 19:10:16.770182  431884 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 19:10:16.770188  431884 command_runner.go:130] > # Cgroup setting for conmon
	I0731 19:10:16.770201  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 19:10:16.770210  431884 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 19:10:16.770221  431884 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 19:10:16.770234  431884 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 19:10:16.770247  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 19:10:16.770256  431884 command_runner.go:130] > conmon_env = [
	I0731 19:10:16.770269  431884 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 19:10:16.770283  431884 command_runner.go:130] > ]
	I0731 19:10:16.770292  431884 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 19:10:16.770302  431884 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 19:10:16.770311  431884 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 19:10:16.770321  431884 command_runner.go:130] > # default_env = [
	I0731 19:10:16.770326  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770337  431884 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 19:10:16.770373  431884 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0731 19:10:16.770387  431884 command_runner.go:130] > # selinux = false
	I0731 19:10:16.770398  431884 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 19:10:16.770413  431884 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 19:10:16.770423  431884 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 19:10:16.770430  431884 command_runner.go:130] > # seccomp_profile = ""
	I0731 19:10:16.770438  431884 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 19:10:16.770450  431884 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 19:10:16.770459  431884 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 19:10:16.770470  431884 command_runner.go:130] > # which might increase security.
	I0731 19:10:16.770479  431884 command_runner.go:130] > # This option is currently deprecated,
	I0731 19:10:16.770492  431884 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 19:10:16.770502  431884 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 19:10:16.770512  431884 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 19:10:16.770525  431884 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 19:10:16.770536  431884 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 19:10:16.770549  431884 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 19:10:16.770560  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.770568  431884 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 19:10:16.770581  431884 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 19:10:16.770588  431884 command_runner.go:130] > # the cgroup blockio controller.
	I0731 19:10:16.770599  431884 command_runner.go:130] > # blockio_config_file = ""
	I0731 19:10:16.770614  431884 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 19:10:16.770624  431884 command_runner.go:130] > # blockio parameters.
	I0731 19:10:16.770634  431884 command_runner.go:130] > # blockio_reload = false
	I0731 19:10:16.770645  431884 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 19:10:16.770653  431884 command_runner.go:130] > # irqbalance daemon.
	I0731 19:10:16.770661  431884 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 19:10:16.770673  431884 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 19:10:16.770683  431884 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 19:10:16.770697  431884 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 19:10:16.770740  431884 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 19:10:16.770750  431884 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 19:10:16.770758  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.770764  431884 command_runner.go:130] > # rdt_config_file = ""
	I0731 19:10:16.770772  431884 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 19:10:16.770780  431884 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 19:10:16.770804  431884 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 19:10:16.770814  431884 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 19:10:16.770823  431884 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 19:10:16.770835  431884 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 19:10:16.770841  431884 command_runner.go:130] > # will be added.
	I0731 19:10:16.770846  431884 command_runner.go:130] > # default_capabilities = [
	I0731 19:10:16.770855  431884 command_runner.go:130] > # 	"CHOWN",
	I0731 19:10:16.770861  431884 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 19:10:16.770866  431884 command_runner.go:130] > # 	"FSETID",
	I0731 19:10:16.770872  431884 command_runner.go:130] > # 	"FOWNER",
	I0731 19:10:16.770877  431884 command_runner.go:130] > # 	"SETGID",
	I0731 19:10:16.770890  431884 command_runner.go:130] > # 	"SETUID",
	I0731 19:10:16.770899  431884 command_runner.go:130] > # 	"SETPCAP",
	I0731 19:10:16.770905  431884 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 19:10:16.770914  431884 command_runner.go:130] > # 	"KILL",
	I0731 19:10:16.770920  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770933  431884 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 19:10:16.770946  431884 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 19:10:16.770954  431884 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 19:10:16.770965  431884 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 19:10:16.770976  431884 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 19:10:16.770983  431884 command_runner.go:130] > default_sysctls = [
	I0731 19:10:16.770993  431884 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 19:10:16.771000  431884 command_runner.go:130] > ]
	I0731 19:10:16.771012  431884 command_runner.go:130] > # List of devices on the host that a
	I0731 19:10:16.771022  431884 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 19:10:16.771029  431884 command_runner.go:130] > # allowed_devices = [
	I0731 19:10:16.771036  431884 command_runner.go:130] > # 	"/dev/fuse",
	I0731 19:10:16.771045  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771052  431884 command_runner.go:130] > # List of additional devices. specified as
	I0731 19:10:16.771064  431884 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 19:10:16.771075  431884 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 19:10:16.771084  431884 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 19:10:16.771092  431884 command_runner.go:130] > # additional_devices = [
	I0731 19:10:16.771097  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771107  431884 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 19:10:16.771119  431884 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 19:10:16.771127  431884 command_runner.go:130] > # 	"/etc/cdi",
	I0731 19:10:16.771132  431884 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 19:10:16.771140  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771152  431884 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 19:10:16.771164  431884 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 19:10:16.771173  431884 command_runner.go:130] > # Defaults to false.
	I0731 19:10:16.771181  431884 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 19:10:16.771195  431884 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 19:10:16.771206  431884 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 19:10:16.771215  431884 command_runner.go:130] > # hooks_dir = [
	I0731 19:10:16.771222  431884 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 19:10:16.771230  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771238  431884 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 19:10:16.771250  431884 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 19:10:16.771258  431884 command_runner.go:130] > # its default mounts from the following two files:
	I0731 19:10:16.771264  431884 command_runner.go:130] > #
	I0731 19:10:16.771274  431884 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 19:10:16.771287  431884 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 19:10:16.771299  431884 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 19:10:16.771305  431884 command_runner.go:130] > #
	I0731 19:10:16.771316  431884 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 19:10:16.771329  431884 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 19:10:16.771345  431884 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 19:10:16.771356  431884 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 19:10:16.771361  431884 command_runner.go:130] > #
	I0731 19:10:16.771368  431884 command_runner.go:130] > # default_mounts_file = ""
	I0731 19:10:16.771379  431884 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 19:10:16.771388  431884 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 19:10:16.771397  431884 command_runner.go:130] > pids_limit = 1024
	I0731 19:10:16.771408  431884 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0731 19:10:16.771421  431884 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 19:10:16.771436  431884 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 19:10:16.771452  431884 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 19:10:16.771461  431884 command_runner.go:130] > # log_size_max = -1
	I0731 19:10:16.771471  431884 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 19:10:16.771478  431884 command_runner.go:130] > # log_to_journald = false
	I0731 19:10:16.771489  431884 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 19:10:16.771500  431884 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 19:10:16.771515  431884 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 19:10:16.771527  431884 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 19:10:16.771538  431884 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 19:10:16.771546  431884 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 19:10:16.771555  431884 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 19:10:16.771567  431884 command_runner.go:130] > # read_only = false
	I0731 19:10:16.771577  431884 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 19:10:16.771590  431884 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 19:10:16.771600  431884 command_runner.go:130] > # live configuration reload.
	I0731 19:10:16.771607  431884 command_runner.go:130] > # log_level = "info"
	I0731 19:10:16.771620  431884 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 19:10:16.771631  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.771637  431884 command_runner.go:130] > # log_filter = ""
	I0731 19:10:16.771645  431884 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 19:10:16.771657  431884 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 19:10:16.771663  431884 command_runner.go:130] > # separated by comma.
	I0731 19:10:16.771678  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771687  431884 command_runner.go:130] > # uid_mappings = ""
	I0731 19:10:16.771696  431884 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 19:10:16.771708  431884 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 19:10:16.771719  431884 command_runner.go:130] > # separated by comma.
	I0731 19:10:16.771733  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771743  431884 command_runner.go:130] > # gid_mappings = ""
	I0731 19:10:16.771753  431884 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 19:10:16.771766  431884 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 19:10:16.771776  431884 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 19:10:16.771791  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771802  431884 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 19:10:16.771812  431884 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 19:10:16.771824  431884 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 19:10:16.771838  431884 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 19:10:16.771853  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771862  431884 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 19:10:16.771871  431884 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 19:10:16.771891  431884 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 19:10:16.771904  431884 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 19:10:16.771920  431884 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 19:10:16.771931  431884 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 19:10:16.771945  431884 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 19:10:16.771955  431884 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 19:10:16.771963  431884 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 19:10:16.771974  431884 command_runner.go:130] > drop_infra_ctr = false
	I0731 19:10:16.771987  431884 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 19:10:16.771998  431884 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 19:10:16.772009  431884 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 19:10:16.772015  431884 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 19:10:16.772026  431884 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 19:10:16.772040  431884 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 19:10:16.772054  431884 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 19:10:16.772066  431884 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 19:10:16.772077  431884 command_runner.go:130] > # shared_cpuset = ""
	I0731 19:10:16.772090  431884 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 19:10:16.772101  431884 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 19:10:16.772112  431884 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 19:10:16.772127  431884 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 19:10:16.772136  431884 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 19:10:16.772147  431884 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 19:10:16.772161  431884 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 19:10:16.772168  431884 command_runner.go:130] > # enable_criu_support = false
	I0731 19:10:16.772180  431884 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 19:10:16.772193  431884 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 19:10:16.772202  431884 command_runner.go:130] > # enable_pod_events = false
	I0731 19:10:16.772213  431884 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 19:10:16.772227  431884 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 19:10:16.772238  431884 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 19:10:16.772247  431884 command_runner.go:130] > # default_runtime = "runc"
	I0731 19:10:16.772255  431884 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 19:10:16.772269  431884 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 19:10:16.772287  431884 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 19:10:16.772299  431884 command_runner.go:130] > # creation as a file is not desired either.
	I0731 19:10:16.772316  431884 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 19:10:16.772341  431884 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 19:10:16.772351  431884 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 19:10:16.772359  431884 command_runner.go:130] > # ]
	I0731 19:10:16.772370  431884 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 19:10:16.772394  431884 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 19:10:16.772406  431884 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 19:10:16.772417  431884 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 19:10:16.772422  431884 command_runner.go:130] > #
	I0731 19:10:16.772429  431884 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 19:10:16.772439  431884 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 19:10:16.772465  431884 command_runner.go:130] > # runtime_type = "oci"
	I0731 19:10:16.772476  431884 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 19:10:16.772484  431884 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 19:10:16.772493  431884 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 19:10:16.772499  431884 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 19:10:16.772508  431884 command_runner.go:130] > # monitor_env = []
	I0731 19:10:16.772514  431884 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 19:10:16.772521  431884 command_runner.go:130] > # allowed_annotations = []
	I0731 19:10:16.772531  431884 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 19:10:16.772539  431884 command_runner.go:130] > # Where:
	I0731 19:10:16.772546  431884 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 19:10:16.772558  431884 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 19:10:16.772570  431884 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 19:10:16.772582  431884 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 19:10:16.772590  431884 command_runner.go:130] > #   in $PATH.
	I0731 19:10:16.772600  431884 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 19:10:16.772610  431884 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 19:10:16.772621  431884 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 19:10:16.772629  431884 command_runner.go:130] > #   state.
	I0731 19:10:16.772639  431884 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 19:10:16.772651  431884 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0731 19:10:16.772663  431884 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 19:10:16.772674  431884 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 19:10:16.772687  431884 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 19:10:16.772698  431884 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 19:10:16.772708  431884 command_runner.go:130] > #   The currently recognized values are:
	I0731 19:10:16.772718  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 19:10:16.772732  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 19:10:16.772749  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 19:10:16.772761  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 19:10:16.772774  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 19:10:16.772787  431884 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 19:10:16.772799  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 19:10:16.772809  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 19:10:16.772821  431884 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 19:10:16.772833  431884 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 19:10:16.772843  431884 command_runner.go:130] > #   deprecated option "conmon".
	I0731 19:10:16.772854  431884 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 19:10:16.772865  431884 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 19:10:16.772877  431884 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 19:10:16.772893  431884 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 19:10:16.772906  431884 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0731 19:10:16.772917  431884 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 19:10:16.772930  431884 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 19:10:16.772941  431884 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 19:10:16.772948  431884 command_runner.go:130] > #
	I0731 19:10:16.772954  431884 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 19:10:16.772962  431884 command_runner.go:130] > #
	I0731 19:10:16.772969  431884 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 19:10:16.772977  431884 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 19:10:16.772980  431884 command_runner.go:130] > #
	I0731 19:10:16.772985  431884 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 19:10:16.772993  431884 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 19:10:16.772997  431884 command_runner.go:130] > #
	I0731 19:10:16.773004  431884 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 19:10:16.773007  431884 command_runner.go:130] > # feature.
	I0731 19:10:16.773010  431884 command_runner.go:130] > #
	I0731 19:10:16.773016  431884 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 19:10:16.773024  431884 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 19:10:16.773030  431884 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 19:10:16.773038  431884 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 19:10:16.773043  431884 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 19:10:16.773048  431884 command_runner.go:130] > #
	I0731 19:10:16.773056  431884 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 19:10:16.773066  431884 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 19:10:16.773069  431884 command_runner.go:130] > #
	I0731 19:10:16.773075  431884 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 19:10:16.773082  431884 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 19:10:16.773086  431884 command_runner.go:130] > #
	I0731 19:10:16.773092  431884 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 19:10:16.773098  431884 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 19:10:16.773106  431884 command_runner.go:130] > # limitation.
	I0731 19:10:16.773110  431884 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 19:10:16.773114  431884 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 19:10:16.773118  431884 command_runner.go:130] > runtime_type = "oci"
	I0731 19:10:16.773123  431884 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 19:10:16.773126  431884 command_runner.go:130] > runtime_config_path = ""
	I0731 19:10:16.773134  431884 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 19:10:16.773139  431884 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 19:10:16.773145  431884 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 19:10:16.773148  431884 command_runner.go:130] > monitor_env = [
	I0731 19:10:16.773153  431884 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 19:10:16.773156  431884 command_runner.go:130] > ]
	I0731 19:10:16.773162  431884 command_runner.go:130] > privileged_without_host_devices = false
	I0731 19:10:16.773171  431884 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 19:10:16.773176  431884 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 19:10:16.773184  431884 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 19:10:16.773191  431884 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 19:10:16.773200  431884 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 19:10:16.773205  431884 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 19:10:16.773216  431884 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 19:10:16.773225  431884 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 19:10:16.773231  431884 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 19:10:16.773240  431884 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 19:10:16.773243  431884 command_runner.go:130] > # Example:
	I0731 19:10:16.773248  431884 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 19:10:16.773252  431884 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 19:10:16.773256  431884 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 19:10:16.773261  431884 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 19:10:16.773264  431884 command_runner.go:130] > # cpuset = 0
	I0731 19:10:16.773268  431884 command_runner.go:130] > # cpushares = "0-1"
	I0731 19:10:16.773271  431884 command_runner.go:130] > # Where:
	I0731 19:10:16.773278  431884 command_runner.go:130] > # The workload name is workload-type.
	I0731 19:10:16.773285  431884 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 19:10:16.773290  431884 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 19:10:16.773294  431884 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 19:10:16.773301  431884 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 19:10:16.773306  431884 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 19:10:16.773310  431884 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 19:10:16.773316  431884 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 19:10:16.773320  431884 command_runner.go:130] > # Default value is set to true
	I0731 19:10:16.773324  431884 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 19:10:16.773332  431884 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 19:10:16.773336  431884 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 19:10:16.773340  431884 command_runner.go:130] > # Default value is set to 'false'
	I0731 19:10:16.773347  431884 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 19:10:16.773353  431884 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 19:10:16.773358  431884 command_runner.go:130] > #
	I0731 19:10:16.773363  431884 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 19:10:16.773369  431884 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 19:10:16.773377  431884 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 19:10:16.773383  431884 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 19:10:16.773391  431884 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 19:10:16.773395  431884 command_runner.go:130] > [crio.image]
	I0731 19:10:16.773402  431884 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 19:10:16.773410  431884 command_runner.go:130] > # default_transport = "docker://"
	I0731 19:10:16.773419  431884 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 19:10:16.773432  431884 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 19:10:16.773441  431884 command_runner.go:130] > # global_auth_file = ""
	I0731 19:10:16.773450  431884 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 19:10:16.773461  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.773467  431884 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 19:10:16.773475  431884 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 19:10:16.773481  431884 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 19:10:16.773488  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.773492  431884 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 19:10:16.773499  431884 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 19:10:16.773505  431884 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 19:10:16.773518  431884 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 19:10:16.773526  431884 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 19:10:16.773529  431884 command_runner.go:130] > # pause_command = "/pause"
	I0731 19:10:16.773535  431884 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 19:10:16.773542  431884 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 19:10:16.773548  431884 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 19:10:16.773553  431884 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 19:10:16.773560  431884 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 19:10:16.773566  431884 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 19:10:16.773571  431884 command_runner.go:130] > # pinned_images = [
	I0731 19:10:16.773575  431884 command_runner.go:130] > # ]
	I0731 19:10:16.773581  431884 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 19:10:16.773589  431884 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 19:10:16.773595  431884 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 19:10:16.773603  431884 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 19:10:16.773607  431884 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 19:10:16.773612  431884 command_runner.go:130] > # signature_policy = ""
	I0731 19:10:16.773618  431884 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 19:10:16.773626  431884 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 19:10:16.773632  431884 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 19:10:16.773640  431884 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 19:10:16.773645  431884 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 19:10:16.773652  431884 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 19:10:16.773657  431884 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 19:10:16.773665  431884 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 19:10:16.773670  431884 command_runner.go:130] > # changing them here.
	I0731 19:10:16.773679  431884 command_runner.go:130] > # insecure_registries = [
	I0731 19:10:16.773684  431884 command_runner.go:130] > # ]
	I0731 19:10:16.773697  431884 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 19:10:16.773708  431884 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 19:10:16.773718  431884 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 19:10:16.773726  431884 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 19:10:16.773736  431884 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 19:10:16.773746  431884 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0731 19:10:16.773751  431884 command_runner.go:130] > # CNI plugins.
	I0731 19:10:16.773755  431884 command_runner.go:130] > [crio.network]
	I0731 19:10:16.773763  431884 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 19:10:16.773771  431884 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 19:10:16.773777  431884 command_runner.go:130] > # cni_default_network = ""
	I0731 19:10:16.773783  431884 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 19:10:16.773789  431884 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 19:10:16.773794  431884 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 19:10:16.773798  431884 command_runner.go:130] > # plugin_dirs = [
	I0731 19:10:16.773804  431884 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 19:10:16.773807  431884 command_runner.go:130] > # ]
	I0731 19:10:16.773813  431884 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 19:10:16.773818  431884 command_runner.go:130] > [crio.metrics]
	I0731 19:10:16.773823  431884 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 19:10:16.773829  431884 command_runner.go:130] > enable_metrics = true
	I0731 19:10:16.773833  431884 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 19:10:16.773845  431884 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 19:10:16.773851  431884 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 19:10:16.773857  431884 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 19:10:16.773863  431884 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 19:10:16.773869  431884 command_runner.go:130] > # metrics_collectors = [
	I0731 19:10:16.773873  431884 command_runner.go:130] > # 	"operations",
	I0731 19:10:16.773877  431884 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 19:10:16.773882  431884 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 19:10:16.773891  431884 command_runner.go:130] > # 	"operations_errors",
	I0731 19:10:16.773895  431884 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 19:10:16.773900  431884 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 19:10:16.773904  431884 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 19:10:16.773910  431884 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 19:10:16.773914  431884 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 19:10:16.773920  431884 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 19:10:16.773924  431884 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 19:10:16.773928  431884 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 19:10:16.773932  431884 command_runner.go:130] > # 	"containers_oom_total",
	I0731 19:10:16.773936  431884 command_runner.go:130] > # 	"containers_oom",
	I0731 19:10:16.773940  431884 command_runner.go:130] > # 	"processes_defunct",
	I0731 19:10:16.773946  431884 command_runner.go:130] > # 	"operations_total",
	I0731 19:10:16.773953  431884 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 19:10:16.773959  431884 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 19:10:16.773968  431884 command_runner.go:130] > # 	"operations_errors_total",
	I0731 19:10:16.773976  431884 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 19:10:16.773986  431884 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 19:10:16.773995  431884 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 19:10:16.774002  431884 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 19:10:16.774013  431884 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 19:10:16.774017  431884 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 19:10:16.774022  431884 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 19:10:16.774029  431884 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 19:10:16.774036  431884 command_runner.go:130] > # ]
	I0731 19:10:16.774043  431884 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 19:10:16.774047  431884 command_runner.go:130] > # metrics_port = 9090
	I0731 19:10:16.774053  431884 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 19:10:16.774057  431884 command_runner.go:130] > # metrics_socket = ""
	I0731 19:10:16.774063  431884 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 19:10:16.774069  431884 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 19:10:16.774082  431884 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 19:10:16.774094  431884 command_runner.go:130] > # certificate on any modification event.
	I0731 19:10:16.774104  431884 command_runner.go:130] > # metrics_cert = ""
	I0731 19:10:16.774113  431884 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 19:10:16.774120  431884 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 19:10:16.774124  431884 command_runner.go:130] > # metrics_key = ""
	I0731 19:10:16.774130  431884 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 19:10:16.774135  431884 command_runner.go:130] > [crio.tracing]
	I0731 19:10:16.774141  431884 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 19:10:16.774148  431884 command_runner.go:130] > # enable_tracing = false
	I0731 19:10:16.774153  431884 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 19:10:16.774159  431884 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 19:10:16.774168  431884 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 19:10:16.774179  431884 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 19:10:16.774188  431884 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 19:10:16.774197  431884 command_runner.go:130] > [crio.nri]
	I0731 19:10:16.774207  431884 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 19:10:16.774214  431884 command_runner.go:130] > # enable_nri = false
	I0731 19:10:16.774218  431884 command_runner.go:130] > # NRI socket to listen on.
	I0731 19:10:16.774224  431884 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 19:10:16.774229  431884 command_runner.go:130] > # NRI plugin directory to use.
	I0731 19:10:16.774237  431884 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 19:10:16.774242  431884 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 19:10:16.774248  431884 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 19:10:16.774257  431884 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 19:10:16.774267  431884 command_runner.go:130] > # nri_disable_connections = false
	I0731 19:10:16.774275  431884 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 19:10:16.774285  431884 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 19:10:16.774296  431884 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 19:10:16.774306  431884 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 19:10:16.774318  431884 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 19:10:16.774326  431884 command_runner.go:130] > [crio.stats]
	I0731 19:10:16.774332  431884 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 19:10:16.774338  431884 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 19:10:16.774343  431884 command_runner.go:130] > # stats_collection_period = 0
	I0731 19:10:16.774376  431884 command_runner.go:130] ! time="2024-07-31 19:10:16.735496664Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 19:10:16.774402  431884 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 19:10:16.774545  431884 cni.go:84] Creating CNI manager for ""
	I0731 19:10:16.774555  431884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 19:10:16.774566  431884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:10:16.774603  431884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-741077 NodeName:multinode-741077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:10:16.774801  431884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-741077"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:10:16.774896  431884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:10:16.785691  431884 command_runner.go:130] > kubeadm
	I0731 19:10:16.785712  431884 command_runner.go:130] > kubectl
	I0731 19:10:16.785716  431884 command_runner.go:130] > kubelet
	I0731 19:10:16.785854  431884 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:10:16.785937  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:10:16.795573  431884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0731 19:10:16.813180  431884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:10:16.830701  431884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 19:10:16.848200  431884 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0731 19:10:16.852603  431884 command_runner.go:130] > 192.168.39.55	control-plane.minikube.internal
	I0731 19:10:16.852698  431884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:10:17.015127  431884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:10:17.030406  431884 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077 for IP: 192.168.39.55
	I0731 19:10:17.030437  431884 certs.go:194] generating shared ca certs ...
	I0731 19:10:17.030457  431884 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:10:17.030637  431884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:10:17.030698  431884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:10:17.030709  431884 certs.go:256] generating profile certs ...
	I0731 19:10:17.030838  431884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/client.key
	I0731 19:10:17.030914  431884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key.542dcf89
	I0731 19:10:17.030967  431884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key
	I0731 19:10:17.030983  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:10:17.031000  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:10:17.031014  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:10:17.031029  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:10:17.031046  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:10:17.031061  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:10:17.031079  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:10:17.031097  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:10:17.031174  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:10:17.031216  431884 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:10:17.031229  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:10:17.031258  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:10:17.031289  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:10:17.031320  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:10:17.031374  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:10:17.031409  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.031427  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.031446  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.032257  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:10:17.058380  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:10:17.085074  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:10:17.111086  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:10:17.136603  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 19:10:17.162266  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:10:17.187655  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:10:17.212982  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 19:10:17.237431  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:10:17.264574  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:10:17.291382  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:10:17.317058  431884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:10:17.334491  431884 ssh_runner.go:195] Run: openssl version
	I0731 19:10:17.340566  431884 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 19:10:17.340686  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:10:17.352433  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357604  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357634  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357686  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.363551  431884 command_runner.go:130] > 3ec20f2e
	I0731 19:10:17.363730  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:10:17.373748  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:10:17.385174  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390267  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390342  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390403  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.396104  431884 command_runner.go:130] > b5213941
	I0731 19:10:17.396190  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:10:17.405623  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:10:17.416481  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421330  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421464  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421513  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.426997  431884 command_runner.go:130] > 51391683
	I0731 19:10:17.427169  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 19:10:17.436295  431884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:10:17.440711  431884 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:10:17.440731  431884 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 19:10:17.440739  431884 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0731 19:10:17.440745  431884 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 19:10:17.440751  431884 command_runner.go:130] > Access: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440756  431884 command_runner.go:130] > Modify: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440760  431884 command_runner.go:130] > Change: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440767  431884 command_runner.go:130] >  Birth: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440816  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:10:17.446826  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.446929  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:10:17.452927  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.453173  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:10:17.459002  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.459064  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:10:17.464758  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.465051  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:10:17.470620  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.470690  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 19:10:17.476253  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.476324  431884 kubeadm.go:392] StartCluster: {Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:10:17.476486  431884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:10:17.476550  431884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:10:17.513842  431884 command_runner.go:130] > d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325
	I0731 19:10:17.513872  431884 command_runner.go:130] > 1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f
	I0731 19:10:17.513878  431884 command_runner.go:130] > 3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0
	I0731 19:10:17.513885  431884 command_runner.go:130] > f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568
	I0731 19:10:17.513890  431884 command_runner.go:130] > 303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7
	I0731 19:10:17.513896  431884 command_runner.go:130] > 26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c
	I0731 19:10:17.513901  431884 command_runner.go:130] > 79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64
	I0731 19:10:17.513907  431884 command_runner.go:130] > 9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6
	I0731 19:10:17.513933  431884 cri.go:89] found id: "d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325"
	I0731 19:10:17.513941  431884 cri.go:89] found id: "1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f"
	I0731 19:10:17.513947  431884 cri.go:89] found id: "3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0"
	I0731 19:10:17.513952  431884 cri.go:89] found id: "f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568"
	I0731 19:10:17.513959  431884 cri.go:89] found id: "303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7"
	I0731 19:10:17.513964  431884 cri.go:89] found id: "26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c"
	I0731 19:10:17.513968  431884 cri.go:89] found id: "79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64"
	I0731 19:10:17.513971  431884 cri.go:89] found id: "9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6"
	I0731 19:10:17.513976  431884 cri.go:89] found id: ""
	I0731 19:10:17.514032  431884 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.496140660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453125496113872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=522f7ef5-d2e1-4450-bea5-aa384b411df4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.497168763Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c416a650-9236-4dd2-91f7-b080dcbde6e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.497244756Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c416a650-9236-4dd2-91f7-b080dcbde6e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.497776620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c416a650-9236-4dd2-91f7-b080dcbde6e5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.541728841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e045bdbe-1fd1-4c3b-91a7-b361fee7e5f1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.541803341Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e045bdbe-1fd1-4c3b-91a7-b361fee7e5f1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.543788058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b65f197-a352-4dae-8bcc-918ea4e47d4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.544253297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453125544227181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b65f197-a352-4dae-8bcc-918ea4e47d4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.544849137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9339c68b-7320-452a-9ee8-3b3ba9274e1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.544936778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9339c68b-7320-452a-9ee8-3b3ba9274e1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.545462110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9339c68b-7320-452a-9ee8-3b3ba9274e1b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.589991425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08b404d4-53f1-47d8-8cf2-909f1e7d8ee0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.590084848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08b404d4-53f1-47d8-8cf2-909f1e7d8ee0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.593220486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94430a86-264e-4da2-9b7f-6169b83474a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.593903827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453125593876236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94430a86-264e-4da2-9b7f-6169b83474a5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.594686427Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dda37c6c-bfb3-4142-8dcc-986852a28575 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.594760883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dda37c6c-bfb3-4142-8dcc-986852a28575 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.595099813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dda37c6c-bfb3-4142-8dcc-986852a28575 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.644095508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cfe3a2c-8f9b-4abf-b6c8-7ea5beeee019 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.644196196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cfe3a2c-8f9b-4abf-b6c8-7ea5beeee019 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.645183879Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a768f3a3-e923-4004-9114-4917799b7954 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.645751056Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453125645727684,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a768f3a3-e923-4004-9114-4917799b7954 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.646232980Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=700476f7-9190-4268-ad8d-433fae4d210d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.646291033Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=700476f7-9190-4268-ad8d-433fae4d210d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:12:05 multinode-741077 crio[2898]: time="2024-07-31 19:12:05.646700710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=700476f7-9190-4268-ad8d-433fae4d210d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b97fc6747afd6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   ee0c4e8b2c216       busybox-fc5497c4f-99dqx
	45a83b75865f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   1e517f1100638       kindnet-4qbk6
	907d6524f1580       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   60a8a80d5e080       coredns-7db6d8ff4d-wj8lb
	2ee0df40942bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   5c12dae1148ec       storage-provisioner
	6ef6dce2926c4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   7b7e1081a8b00       kube-proxy-mw9ls
	15e863e977792       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   a86a91c1c20a9       etcd-multinode-741077
	cb176e309b9f7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   16a58c4a2fc71       kube-controller-manager-multinode-741077
	0676a8d3c1f6b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   41e31c1dc8ab6       kube-apiserver-multinode-741077
	89261b71d79a8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   a902e23540ac9       kube-scheduler-multinode-741077
	e44e22ae723b8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   ff29ec0020d86       busybox-fc5497c4f-99dqx
	d55f850c96623       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   3f92efbc1c57f       coredns-7db6d8ff4d-wj8lb
	1816d14ba8056       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   faf07930b3c5c       storage-provisioner
	3c3762006378b       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   4576cd98ade56       kindnet-4qbk6
	f64d369909629       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   1e9ebb4a10d87       kube-proxy-mw9ls
	303524292a3a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   6f173fa22537a       etcd-multinode-741077
	26b294b731f07       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   fd0585a2250b4       kube-scheduler-multinode-741077
	79cdedb3c18fb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   78afbb02bd316       kube-controller-manager-multinode-741077
	9c1b1bd427bf0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   37559affd3c8c       kube-apiserver-multinode-741077
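	
	For reference, the container status table above is CRI-O's view of every container on the control-plane node as captured by minikube logs. Assuming the multinode-741077 profile is still running, a roughly equivalent listing can be re-queried from the host with:
	
	    out/minikube-linux-amd64 -p multinode-741077 ssh "sudo crictl ps -a"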
	
	
	==> coredns [907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50804 - 30749 "HINFO IN 6117311924496023715.820196164584186632. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020821961s
	
	
	==> coredns [d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325] <==
	[INFO] 10.244.1.2:54822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181687s
	[INFO] 10.244.1.2:34109 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107656s
	[INFO] 10.244.1.2:50943 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009601s
	[INFO] 10.244.1.2:51483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001313488s
	[INFO] 10.244.1.2:54761 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006026s
	[INFO] 10.244.1.2:51072 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005785s
	[INFO] 10.244.1.2:45407 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074484s
	[INFO] 10.244.0.3:46460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010851s
	[INFO] 10.244.0.3:57302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065688s
	[INFO] 10.244.0.3:45034 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076394s
	[INFO] 10.244.0.3:35377 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005713s
	[INFO] 10.244.1.2:56065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129371s
	[INFO] 10.244.1.2:38412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091324s
	[INFO] 10.244.1.2:59251 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097258s
	[INFO] 10.244.1.2:46978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154639s
	[INFO] 10.244.0.3:54413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079161s
	[INFO] 10.244.0.3:52198 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221185s
	[INFO] 10.244.0.3:48871 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077338s
	[INFO] 10.244.0.3:50362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081957s
	[INFO] 10.244.1.2:42436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148372s
	[INFO] 10.244.1.2:34597 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011086s
	[INFO] 10.244.1.2:35776 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085166s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086725s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-741077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-741077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=multinode-741077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_03_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:03:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741077
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-741077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7740aefdd974dac90305bd0c46ded41
	  System UUID:                c7740aef-dd97-4dac-9030-5bd0c46ded41
	  Boot ID:                    3ef53520-ca80-4f5e-bd45-a49390b976a5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-99dqx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-7db6d8ff4d-wj8lb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m24s
	  kube-system                 etcd-multinode-741077                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m38s
	  kube-system                 kindnet-4qbk6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m24s
	  kube-system                 kube-apiserver-multinode-741077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-multinode-741077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-proxy-mw9ls                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-scheduler-multinode-741077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m23s                  kube-proxy       
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  8m44s (x8 over 8m44s)  kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m44s (x8 over 8m44s)  kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m44s (x7 over 8m44s)  kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m38s                  kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m38s                  kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s                  kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m38s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m25s                  node-controller  Node multinode-741077 event: Registered Node multinode-741077 in Controller
	  Normal  NodeReady                8m9s                   kubelet          Node multinode-741077 status is now: NodeReady
	  Normal  Starting                 107s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  107s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s (x8 over 107s)    kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 107s)    kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 107s)    kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                    node-controller  Node multinode-741077 event: Registered Node multinode-741077 in Controller
	
	
	Name:               multinode-741077-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-741077-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=multinode-741077
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_11_03_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:11:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741077-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:11:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:11:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:11:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:11:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    multinode-741077-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b7b3347bb304251b0635448ba8b2c2e
	  System UUID:                8b7b3347-bb30-4251-b063-5448ba8b2c2e
	  Boot ID:                    aaad1be5-26c6-4e5e-9f47-abd28880ee40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xstbr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-zjjn6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-proxy-k775h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m30s                  kube-proxy  
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m35s (x2 over 7m35s)  kubelet     Node multinode-741077-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m35s (x2 over 7m35s)  kubelet     Node multinode-741077-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m35s (x2 over 7m35s)  kubelet     Node multinode-741077-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m14s                  kubelet     Node multinode-741077-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-741077-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-741077-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-741077-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-741077-m02 status is now: NodeReady
	
	
	Name:               multinode-741077-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-741077-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=multinode-741077
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_11_43_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:11:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741077-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:12:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:12:02 +0000   Wed, 31 Jul 2024 19:11:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:12:02 +0000   Wed, 31 Jul 2024 19:11:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:12:02 +0000   Wed, 31 Jul 2024 19:11:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:12:02 +0000   Wed, 31 Jul 2024 19:12:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    multinode-741077-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 faaebeb86cff43d19935aed0951ce41d
	  System UUID:                faaebeb8-6cff-43d1-9935-aed0951ce41d
	  Boot ID:                    dba06338-62c7-4eb3-b866-693a0e21ff2d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ml2nd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-proxy-nrftq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m32s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m43s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m38s (x2 over 6m38s)  kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s (x2 over 6m38s)  kubelet     Node multinode-741077-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s (x2 over 6m38s)  kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m17s                  kubelet     Node multinode-741077-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-741077-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m28s                  kubelet     Node multinode-741077-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  24s (x2 over 24s)      kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x2 over 24s)      kubelet     Node multinode-741077-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x2 over 24s)      kubelet     Node multinode-741077-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-741077-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.065696] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.186152] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.123682] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.274137] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.289633] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.060117] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.135375] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.943447] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.099838] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.076897] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.638860] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +0.112431] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.458865] kauditd_printk_skb: 56 callbacks suppressed
	[Jul31 19:04] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 19:10] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.158822] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[  +0.174139] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.145003] systemd-fstab-generator[2855]: Ignoring "noauto" option for root device
	[  +0.282503] systemd-fstab-generator[2883]: Ignoring "noauto" option for root device
	[  +0.737511] systemd-fstab-generator[2981]: Ignoring "noauto" option for root device
	[  +1.746352] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +4.681706] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.819370] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.948175] systemd-fstab-generator[3943]: Ignoring "noauto" option for root device
	[ +18.041026] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38] <==
	{"level":"info","ts":"2024-07-31T19:10:20.060998Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:10:20.061008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:10:20.061273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 switched to configuration voters=(17042293819748820353)"}
	{"level":"info","ts":"2024-07-31T19:10:20.061343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6efde86ab6af376b","local-member-id":"ec8263ef63f6a581","added-peer-id":"ec8263ef63f6a581","added-peer-peer-urls":["https://192.168.39.55:2380"]}
	{"level":"info","ts":"2024-07-31T19:10:20.061577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6efde86ab6af376b","local-member-id":"ec8263ef63f6a581","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:10:20.061619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:10:20.070882Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:10:20.075624Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:10:20.075662Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:10:20.076482Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec8263ef63f6a581","initial-advertise-peer-urls":["https://192.168.39.55:2380"],"listen-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:10:20.076537Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:10:21.220421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.220551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.22061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 received MsgPreVoteResp from ec8263ef63f6a581 at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.22064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 received MsgVoteResp from ec8263ef63f6a581 at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec8263ef63f6a581 elected leader ec8263ef63f6a581 at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.227172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:10:21.227122Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ec8263ef63f6a581","local-member-attributes":"{Name:multinode-741077 ClientURLs:[https://192.168.39.55:2379]}","request-path":"/0/members/ec8263ef63f6a581/attributes","cluster-id":"6efde86ab6af376b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:10:21.228541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:10:21.229626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.55:2379"}
	{"level":"info","ts":"2024-07-31T19:10:21.230458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:10:21.230507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:10:21.230629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7] <==
	{"level":"info","ts":"2024-07-31T19:05:30.706837Z","caller":"traceutil/trace.go:171","msg":"trace[27633436] transaction","detail":"{read_only:false; response_revision:662; number_of_response:1; }","duration":"192.54722ms","start":"2024-07-31T19:05:30.514271Z","end":"2024-07-31T19:05:30.706818Z","steps":["trace[27633436] 'process raft request'  (duration: 192.480323ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:05:30.707149Z","caller":"traceutil/trace.go:171","msg":"trace[1399407284] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"279.550349ms","start":"2024-07-31T19:05:30.427584Z","end":"2024-07-31T19:05:30.707134Z","steps":["trace[1399407284] 'process raft request'  (duration: 235.631983ms)","trace[1399407284] 'compare'  (duration: 43.397317ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:05:30.707409Z","caller":"traceutil/trace.go:171","msg":"trace[1121990176] linearizableReadLoop","detail":"{readStateIndex:709; appliedIndex:708; }","duration":"267.794174ms","start":"2024-07-31T19:05:30.439553Z","end":"2024-07-31T19:05:30.707348Z","steps":["trace[1121990176] 'read index received'  (duration: 73.956225ms)","trace[1121990176] 'applied index is now lower than readState.Index'  (duration: 193.836977ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:05:30.707555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.98636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-741077-m03\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-31T19:05:30.707618Z","caller":"traceutil/trace.go:171","msg":"trace[220000816] range","detail":"{range_begin:/registry/minions/multinode-741077-m03; range_end:; response_count:1; response_revision:662; }","duration":"268.078758ms","start":"2024-07-31T19:05:30.439529Z","end":"2024-07-31T19:05:30.707607Z","steps":["trace[220000816] 'agreement among raft nodes before linearized reading'  (duration: 267.982018ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.707566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.21095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T19:05:30.707808Z","caller":"traceutil/trace.go:171","msg":"trace[727598803] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:662; }","duration":"187.470036ms","start":"2024-07-31T19:05:30.520328Z","end":"2024-07-31T19:05:30.707798Z","steps":["trace[727598803] 'agreement among raft nodes before linearized reading'  (duration: 187.194283ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:05:30.93274Z","caller":"traceutil/trace.go:171","msg":"trace[1510250675] linearizableReadLoop","detail":"{readStateIndex:711; appliedIndex:710; }","duration":"175.711481ms","start":"2024-07-31T19:05:30.757013Z","end":"2024-07-31T19:05:30.932725Z","steps":["trace[1510250675] 'read index received'  (duration: 170.76679ms)","trace[1510250675] 'applied index is now lower than readState.Index'  (duration: 4.944206ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:05:30.932827Z","caller":"traceutil/trace.go:171","msg":"trace[224791766] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"214.585064ms","start":"2024-07-31T19:05:30.718236Z","end":"2024-07-31T19:05:30.932822Z","steps":["trace[224791766] 'process raft request'  (duration: 209.586039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.337542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-31T19:05:30.93421Z","caller":"traceutil/trace.go:171","msg":"trace[1794451727] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:663; }","duration":"146.559138ms","start":"2024-07-31T19:05:30.787639Z","end":"2024-07-31T19:05:30.934198Z","steps":["trace[1794451727] 'agreement among raft nodes before linearized reading'  (duration: 145.310999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.097978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T19:05:30.934522Z","caller":"traceutil/trace.go:171","msg":"trace[1781365690] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:663; }","duration":"177.530369ms","start":"2024-07-31T19:05:30.756982Z","end":"2024-07-31T19:05:30.934513Z","steps":["trace[1781365690] 'agreement among raft nodes before linearized reading'  (duration: 176.108693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.966414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-31T19:05:30.934708Z","caller":"traceutil/trace.go:171","msg":"trace[1123417128] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:663; }","duration":"146.475337ms","start":"2024-07-31T19:05:30.788226Z","end":"2024-07-31T19:05:30.934701Z","steps":["trace[1123417128] 'agreement among raft nodes before linearized reading'  (duration: 144.906877ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:08:44.01447Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T19:08:44.014618Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-741077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"]}
	{"level":"warn","ts":"2024-07-31T19:08:44.014823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.014942Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.10401Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.55:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.104256Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.55:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:08:44.104491Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec8263ef63f6a581","current-leader-member-id":"ec8263ef63f6a581"}
	{"level":"info","ts":"2024-07-31T19:08:44.107404Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:08:44.107587Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:08:44.107621Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-741077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"]}
	
	
	==> kernel <==
	 19:12:06 up 9 min,  0 users,  load average: 0.38, 0.36, 0.18
	Linux multinode-741077 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0] <==
	I0731 19:07:56.880785       1 main.go:299] handling current node
	I0731 19:08:06.884228       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:06.884422       1 main.go:299] handling current node
	I0731 19:08:06.884478       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:06.884489       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:06.884764       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:06.884797       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:16.876231       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:16.876342       1 main.go:299] handling current node
	I0731 19:08:16.876440       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:16.876477       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:16.876653       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:16.876678       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:26.877068       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:26.877183       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:26.877338       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:26.877480       1 main.go:299] handling current node
	I0731 19:08:26.877519       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:26.877585       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:36.884397       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:36.884453       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:36.884615       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:36.884642       1 main.go:299] handling current node
	I0731 19:08:36.884654       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:36.884658       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80] <==
	I0731 19:11:24.763634       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:11:34.763259       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:11:34.763314       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:11:34.763572       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:11:34.763615       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:11:34.763715       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:11:34.763749       1 main.go:299] handling current node
	I0731 19:11:44.763064       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:11:44.763114       1 main.go:299] handling current node
	I0731 19:11:44.763140       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:11:44.763147       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:11:44.763417       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:11:44.763428       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.2.0/24] 
	I0731 19:11:54.763903       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:11:54.764122       1 main.go:299] handling current node
	I0731 19:11:54.764173       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:11:54.764199       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:11:54.764459       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:11:54.764524       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.2.0/24] 
	I0731 19:12:04.763189       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:12:04.763327       1 main.go:299] handling current node
	I0731 19:12:04.763440       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:12:04.763458       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:12:04.763781       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:12:04.763828       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d] <==
	I0731 19:10:22.540944       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0731 19:10:22.599070       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:10:22.604999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:10:22.625124       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:10:22.625736       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:10:22.625795       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:10:22.626478       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:10:22.626520       1 policy_source.go:224] refreshing policies
	I0731 19:10:22.639290       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:10:22.640963       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:10:22.641046       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 19:10:22.645987       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:10:22.647009       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:10:22.647106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:10:22.647131       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:10:22.649733       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:10:22.686615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:10:23.517652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:10:24.906232       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:10:25.031529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 19:10:25.046463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 19:10:25.119666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:10:25.129874       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:10:34.986860       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:10:35.123886       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6] <==
	W0731 19:08:44.032930       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.032977       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033003       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033031       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033063       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033086       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033114       1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033139       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033166       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033202       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033232       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033261       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033275       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033289       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033319       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033321       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033353       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033486       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033526       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033557       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033607       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033637       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.034168       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.034625       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033353       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64] <==
	I0731 19:04:30.440108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m02" podCIDRs=["10.244.1.0/24"]
	I0731 19:04:35.372828       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-741077-m02"
	I0731 19:04:51.032176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:04:53.511240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.882579ms"
	I0731 19:04:53.529167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.480211ms"
	I0731 19:04:53.529264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.707µs"
	I0731 19:04:53.539113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.976µs"
	I0731 19:04:53.561080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.133µs"
	I0731 19:04:56.972186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.596211ms"
	I0731 19:04:56.972476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.237µs"
	I0731 19:04:57.447215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.94724ms"
	I0731 19:04:57.447545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45µs"
	I0731 19:05:28.954832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:05:28.955005       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:05:28.998956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.2.0/24"]
	I0731 19:05:30.396921       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-741077-m03"
	I0731 19:05:49.092352       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:17.876323       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:18.993776       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:18.995993       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:06:19.012921       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.3.0/24"]
	I0731 19:06:38.145827       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:07:15.461493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m03"
	I0731 19:07:15.504484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.934679ms"
	I0731 19:07:15.505955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.911µs"
	
	
	==> kube-controller-manager [cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f] <==
	I0731 19:10:35.589012       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:10:35.589052       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 19:10:59.370928       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.58282ms"
	I0731 19:10:59.390308       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.194311ms"
	I0731 19:10:59.406845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.479874ms"
	I0731 19:10:59.406933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.026µs"
	I0731 19:11:03.571225       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m02\" does not exist"
	I0731 19:11:03.583427       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m02" podCIDRs=["10.244.1.0/24"]
	I0731 19:11:05.461902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.055µs"
	I0731 19:11:05.471885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.299µs"
	I0731 19:11:05.484246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.976µs"
	I0731 19:11:05.524195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.616µs"
	I0731 19:11:05.530794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.294µs"
	I0731 19:11:05.539044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.705µs"
	I0731 19:11:05.719713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.434µs"
	I0731 19:11:23.356818       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:23.377886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.917µs"
	I0731 19:11:23.393258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.843µs"
	I0731 19:11:26.880914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.753401ms"
	I0731 19:11:26.881166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.146µs"
	I0731 19:11:41.891146       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:42.782534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:11:42.783208       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:42.805088       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.2.0/24"]
	I0731 19:12:02.687214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m03"
	
	
	==> kube-proxy [6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac] <==
	I0731 19:10:23.822904       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:10:23.844966       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0731 19:10:23.922542       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:10:23.922598       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:10:23.922618       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:10:23.931758       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:10:23.932122       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:10:23.932317       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:10:23.936744       1 config.go:192] "Starting service config controller"
	I0731 19:10:23.936783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:10:23.936811       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:10:23.936815       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:10:23.936852       1 config.go:319] "Starting node config controller"
	I0731 19:10:23.936873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:10:24.037336       1 shared_informer.go:320] Caches are synced for node config
	I0731 19:10:24.037428       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:10:24.037475       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568] <==
	I0731 19:03:42.161920       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:03:42.177302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0731 19:03:42.222755       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:03:42.222851       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:03:42.222883       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:03:42.226135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:03:42.226486       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:03:42.226535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:03:42.228930       1 config.go:192] "Starting service config controller"
	I0731 19:03:42.229193       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:03:42.229249       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:03:42.229267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:03:42.230128       1 config.go:319] "Starting node config controller"
	I0731 19:03:42.230680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:03:42.329670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:03:42.329752       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:03:42.331254       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c] <==
	E0731 19:03:24.768477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:03:24.768483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:03:24.768489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:03:24.768598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:03:24.768605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 19:03:25.594309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:03:25.594346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:03:25.685356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:03:25.685484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:03:25.713940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:03:25.714157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:03:25.741088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:03:25.741203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:03:25.914774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:03:25.914893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 19:03:25.948869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:03:25.948951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:03:25.951357       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:03:25.951481       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:03:25.962460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:03:25.962530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:03:26.073434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:03:26.073612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 19:03:28.465458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:08:44.016200       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6] <==
	I0731 19:10:20.698077       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:10:22.557152       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:10:22.557267       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:10:22.557278       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:10:22.557284       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:10:22.646622       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:10:22.646662       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:10:22.648219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:10:22.648340       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:10:22.662468       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:10:22.652978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:10:22.763271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:10:19 multinode-741077 kubelet[3111]: E0731 19:10:19.868531    3111 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-741077&limit=500&resourceVersion=0": dial tcp 192.168.39.55:8443: connect: connection refused
	Jul 31 19:10:20 multinode-741077 kubelet[3111]: I0731 19:10:20.394103    3111 kubelet_node_status.go:73] "Attempting to register node" node="multinode-741077"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.686301    3111 kubelet_node_status.go:112] "Node was previously registered" node="multinode-741077"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.686454    3111 kubelet_node_status.go:76] "Successfully registered node" node="multinode-741077"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.689070    3111 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.690038    3111 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.875454    3111 apiserver.go:52] "Watching apiserver"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.878855    3111 topology_manager.go:215] "Topology Admit Handler" podUID="51cf5405-60a0-4f19-a850-ae06b9da9835" podNamespace="kube-system" podName="kindnet-4qbk6"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.879004    3111 topology_manager.go:215] "Topology Admit Handler" podUID="62387ff7-fdfc-42c3-b320-dd0e23eb2d96" podNamespace="kube-system" podName="kube-proxy-mw9ls"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.879058    3111 topology_manager.go:215] "Topology Admit Handler" podUID="5af0cad6-0a64-45f3-91bd-b98cc3b74609" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wj8lb"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.879116    3111 topology_manager.go:215] "Topology Admit Handler" podUID="fa39cd40-fd74-4448-b66f-b88f8730194c" podNamespace="kube-system" podName="storage-provisioner"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.879186    3111 topology_manager.go:215] "Topology Admit Handler" podUID="c4427c4a-ddce-46cd-9a6d-340840c8704f" podNamespace="default" podName="busybox-fc5497c4f-99dqx"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.885903    3111 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957729    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51cf5405-60a0-4f19-a850-ae06b9da9835-xtables-lock\") pod \"kindnet-4qbk6\" (UID: \"51cf5405-60a0-4f19-a850-ae06b9da9835\") " pod="kube-system/kindnet-4qbk6"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957781    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62387ff7-fdfc-42c3-b320-dd0e23eb2d96-xtables-lock\") pod \"kube-proxy-mw9ls\" (UID: \"62387ff7-fdfc-42c3-b320-dd0e23eb2d96\") " pod="kube-system/kube-proxy-mw9ls"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957799    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62387ff7-fdfc-42c3-b320-dd0e23eb2d96-lib-modules\") pod \"kube-proxy-mw9ls\" (UID: \"62387ff7-fdfc-42c3-b320-dd0e23eb2d96\") " pod="kube-system/kube-proxy-mw9ls"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957822    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51cf5405-60a0-4f19-a850-ae06b9da9835-lib-modules\") pod \"kindnet-4qbk6\" (UID: \"51cf5405-60a0-4f19-a850-ae06b9da9835\") " pod="kube-system/kindnet-4qbk6"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957835    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa39cd40-fd74-4448-b66f-b88f8730194c-tmp\") pod \"storage-provisioner\" (UID: \"fa39cd40-fd74-4448-b66f-b88f8730194c\") " pod="kube-system/storage-provisioner"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957870    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/51cf5405-60a0-4f19-a850-ae06b9da9835-cni-cfg\") pod \"kindnet-4qbk6\" (UID: \"51cf5405-60a0-4f19-a850-ae06b9da9835\") " pod="kube-system/kindnet-4qbk6"
	Jul 31 19:10:30 multinode-741077 kubelet[3111]: I0731 19:10:30.518422    3111 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 19:11:18 multinode-741077 kubelet[3111]: E0731 19:11:18.976357    3111 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 19:12:05.213749  433038 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19356-395032/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-741077 -n multinode-741077
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-741077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.14s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 stop
E0731 19:13:48.017881  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-741077 stop: exit status 82 (2m0.487646436s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-741077-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-741077 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-741077 status: exit status 3 (18.683752885s)

                                                
                                                
-- stdout --
	multinode-741077
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-741077-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 19:14:28.740761  433699 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.72:22: connect: no route to host
	E0731 19:14:28.740818  433699 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.72:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-741077 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-741077 -n multinode-741077
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-741077 logs -n 25: (1.527688289s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077:/home/docker/cp-test_multinode-741077-m02_multinode-741077.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077 sudo cat                                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m02_multinode-741077.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03:/home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077-m03 sudo cat                                   | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp testdata/cp-test.txt                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077:/home/docker/cp-test_multinode-741077-m03_multinode-741077.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077 sudo cat                                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02:/home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077-m02 sudo cat                                   | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-741077 node stop m03                                                          | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:06 UTC |
	| node    | multinode-741077 node start                                                             | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC | 31 Jul 24 19:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| stop    | -p multinode-741077                                                                     | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| start   | -p multinode-741077                                                                     | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:08 UTC | 31 Jul 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC |                     |
	| node    | multinode-741077 node delete                                                            | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC | 31 Jul 24 19:12 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-741077 stop                                                                   | multinode-741077 | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:08:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:08:42.936685  431884 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:08:42.936841  431884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:08:42.936852  431884 out.go:304] Setting ErrFile to fd 2...
	I0731 19:08:42.936859  431884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:08:42.937037  431884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:08:42.937588  431884 out.go:298] Setting JSON to false
	I0731 19:08:42.938617  431884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10266,"bootTime":1722442657,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:08:42.938681  431884 start.go:139] virtualization: kvm guest
	I0731 19:08:42.941162  431884 out.go:177] * [multinode-741077] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:08:42.942921  431884 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:08:42.942955  431884 notify.go:220] Checking for updates...
	I0731 19:08:42.945656  431884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:08:42.947015  431884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:08:42.948472  431884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:08:42.950053  431884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:08:42.951481  431884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:08:42.953269  431884 config.go:182] Loaded profile config "multinode-741077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:08:42.953387  431884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:08:42.953882  431884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:08:42.953942  431884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:08:42.970048  431884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I0731 19:08:42.970614  431884 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:08:42.971222  431884 main.go:141] libmachine: Using API Version  1
	I0731 19:08:42.971248  431884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:08:42.971589  431884 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:08:42.971792  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.007792  431884 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:08:43.009139  431884 start.go:297] selected driver: kvm2
	I0731 19:08:43.009158  431884 start.go:901] validating driver "kvm2" against &{Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:08:43.009325  431884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:08:43.009698  431884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:08:43.009777  431884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:08:43.025166  431884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:08:43.025924  431884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:08:43.026007  431884 cni.go:84] Creating CNI manager for ""
	I0731 19:08:43.026026  431884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 19:08:43.026096  431884 start.go:340] cluster config:
	{Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:08:43.026260  431884 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:08:43.028901  431884 out.go:177] * Starting "multinode-741077" primary control-plane node in "multinode-741077" cluster
	I0731 19:08:43.030416  431884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:08:43.030465  431884 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:08:43.030477  431884 cache.go:56] Caching tarball of preloaded images
	I0731 19:08:43.030587  431884 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:08:43.030599  431884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:08:43.030723  431884 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/config.json ...
	I0731 19:08:43.030979  431884 start.go:360] acquireMachinesLock for multinode-741077: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:08:43.031029  431884 start.go:364] duration metric: took 27.686µs to acquireMachinesLock for "multinode-741077"
	I0731 19:08:43.031049  431884 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:08:43.031058  431884 fix.go:54] fixHost starting: 
	I0731 19:08:43.031321  431884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:08:43.031357  431884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:08:43.046407  431884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I0731 19:08:43.046857  431884 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:08:43.047334  431884 main.go:141] libmachine: Using API Version  1
	I0731 19:08:43.047361  431884 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:08:43.047797  431884 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:08:43.048039  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.048231  431884 main.go:141] libmachine: (multinode-741077) Calling .GetState
	I0731 19:08:43.050007  431884 fix.go:112] recreateIfNeeded on multinode-741077: state=Running err=<nil>
	W0731 19:08:43.050026  431884 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:08:43.052167  431884 out.go:177] * Updating the running kvm2 "multinode-741077" VM ...
	I0731 19:08:43.053496  431884 machine.go:94] provisionDockerMachine start ...
	I0731 19:08:43.053530  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:08:43.053772  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.056343  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.056862  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.056892  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.057069  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.057255  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.057389  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.057516  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.057683  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.057931  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.057945  431884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:08:43.161617  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741077
	
	I0731 19:08:43.161651  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.161898  431884 buildroot.go:166] provisioning hostname "multinode-741077"
	I0731 19:08:43.161924  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.162159  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.165278  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.165669  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.165706  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.165805  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.166044  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.166279  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.166450  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.166662  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.166850  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.166873  431884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-741077 && echo "multinode-741077" | sudo tee /etc/hostname
	I0731 19:08:43.290649  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-741077
	
	I0731 19:08:43.290676  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.293528  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.293916  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.293968  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.294183  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.294398  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.294553  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.294708  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.294882  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.295099  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.295122  431884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-741077' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-741077/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-741077' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:08:43.398057  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:08:43.398112  431884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:08:43.398149  431884 buildroot.go:174] setting up certificates
	I0731 19:08:43.398160  431884 provision.go:84] configureAuth start
	I0731 19:08:43.398174  431884 main.go:141] libmachine: (multinode-741077) Calling .GetMachineName
	I0731 19:08:43.398483  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:08:43.401464  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.401862  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.401892  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.402020  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.404799  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.405349  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.405380  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.405535  431884 provision.go:143] copyHostCerts
	I0731 19:08:43.405563  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:08:43.405590  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:08:43.405599  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:08:43.405666  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:08:43.405794  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:08:43.405831  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:08:43.405835  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:08:43.405863  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:08:43.405912  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:08:43.405928  431884 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:08:43.405934  431884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:08:43.405955  431884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:08:43.405999  431884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.multinode-741077 san=[127.0.0.1 192.168.39.55 localhost minikube multinode-741077]
	I0731 19:08:43.702587  431884 provision.go:177] copyRemoteCerts
	I0731 19:08:43.702649  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:08:43.702675  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.705536  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.705883  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.705913  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.706091  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.706410  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.706607  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.706804  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:08:43.791316  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 19:08:43.791410  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:08:43.820288  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 19:08:43.820359  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:08:43.848969  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 19:08:43.849066  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 19:08:43.879761  431884 provision.go:87] duration metric: took 481.585158ms to configureAuth
	I0731 19:08:43.879789  431884 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:08:43.880022  431884 config.go:182] Loaded profile config "multinode-741077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:08:43.880150  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:08:43.883053  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.883383  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:08:43.883401  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:08:43.883615  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:08:43.883892  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.884085  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:08:43.884244  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:08:43.884416  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:08:43.884583  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:08:43.884599  431884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:10:14.767772  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:10:14.767805  431884 machine.go:97] duration metric: took 1m31.714286716s to provisionDockerMachine
	I0731 19:10:14.767826  431884 start.go:293] postStartSetup for "multinode-741077" (driver="kvm2")
	I0731 19:10:14.767855  431884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:10:14.767883  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:14.768253  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:10:14.768300  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:14.772178  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.772674  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:14.772713  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.772898  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:14.773088  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.773299  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:14.773454  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:14.856337  431884 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:10:14.861107  431884 command_runner.go:130] > NAME=Buildroot
	I0731 19:10:14.861138  431884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 19:10:14.861142  431884 command_runner.go:130] > ID=buildroot
	I0731 19:10:14.861147  431884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 19:10:14.861152  431884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 19:10:14.861186  431884 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:10:14.861199  431884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:10:14.861268  431884 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:10:14.861346  431884 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:10:14.861356  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /etc/ssl/certs/4023132.pem
	I0731 19:10:14.861436  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:10:14.872547  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:10:14.900221  431884 start.go:296] duration metric: took 132.374172ms for postStartSetup
	I0731 19:10:14.900271  431884 fix.go:56] duration metric: took 1m31.869212292s for fixHost
	I0731 19:10:14.900302  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:14.903061  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.903441  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:14.903484  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:14.903646  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:14.903864  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.904024  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:14.904155  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:14.904286  431884 main.go:141] libmachine: Using SSH client type: native
	I0731 19:10:14.904516  431884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I0731 19:10:14.904530  431884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:10:15.009569  431884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722453014.984856395
	
	I0731 19:10:15.009594  431884 fix.go:216] guest clock: 1722453014.984856395
	I0731 19:10:15.009604  431884 fix.go:229] Guest: 2024-07-31 19:10:14.984856395 +0000 UTC Remote: 2024-07-31 19:10:14.900278853 +0000 UTC m=+92.000956096 (delta=84.577542ms)
	I0731 19:10:15.009657  431884 fix.go:200] guest clock delta is within tolerance: 84.577542ms
	I0731 19:10:15.009667  431884 start.go:83] releasing machines lock for "multinode-741077", held for 1m31.978625357s
	I0731 19:10:15.009699  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.009977  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:10:15.013169  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.013697  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.013725  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.013908  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014459  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014664  431884 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:10:15.014770  431884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:10:15.014817  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:15.014943  431884 ssh_runner.go:195] Run: cat /version.json
	I0731 19:10:15.014968  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:10:15.017511  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.017709  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.017975  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.018001  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.018079  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:15.018110  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:15.018115  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:15.018309  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:10:15.018345  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:15.018411  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:10:15.018509  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:15.018528  431884 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:10:15.018690  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:15.018686  431884 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:10:15.114093  431884 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 19:10:15.114190  431884 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 19:10:15.114275  431884 ssh_runner.go:195] Run: systemctl --version
	I0731 19:10:15.120053  431884 command_runner.go:130] > systemd 252 (252)
	I0731 19:10:15.120109  431884 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 19:10:15.120187  431884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:10:15.285534  431884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 19:10:15.291636  431884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 19:10:15.291723  431884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:10:15.291785  431884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:10:15.301364  431884 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:10:15.301395  431884 start.go:495] detecting cgroup driver to use...
	I0731 19:10:15.301477  431884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:10:15.317720  431884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:10:15.332202  431884 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:10:15.332259  431884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:10:15.346838  431884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:10:15.362440  431884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:10:15.514690  431884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:10:15.663447  431884 docker.go:233] disabling docker service ...
	I0731 19:10:15.663533  431884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:10:15.681477  431884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:10:15.695228  431884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:10:15.840827  431884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:10:15.983513  431884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:10:15.998520  431884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:10:16.018656  431884 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 19:10:16.019105  431884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:10:16.019176  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.030159  431884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:10:16.030234  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.042285  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.052684  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.063375  431884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:10:16.074276  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.084806  431884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.096195  431884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:10:16.106717  431884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:10:16.116407  431884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 19:10:16.116486  431884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:10:16.125779  431884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:10:16.267843  431884 ssh_runner.go:195] Run: sudo systemctl restart crio
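The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins registry.k8s.io/pause:3.9 as the pause image, uses cgroupfs as the cgroup manager with conmon_cgroup = "pod", and allows unprivileged ports from 0. A simplified Go sketch of equivalent in-memory edits follows; the regexps mirror the sed expressions in the log, but this is not minikube's implementation:

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf applies edits analogous to the sed commands in the log:
// pin the pause image, switch to cgroupfs, and open unprivileged port 0.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return conf
}

func main() {
	in := `pause_image = "old"
cgroup_manager = "systemd"
default_sysctls = [
]
`
	fmt.Print(patchCrioConf(in))
}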
	I0731 19:10:16.520510  431884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:10:16.520589  431884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:10:16.525981  431884 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 19:10:16.526008  431884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 19:10:16.526017  431884 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I0731 19:10:16.526026  431884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 19:10:16.526042  431884 command_runner.go:130] > Access: 2024-07-31 19:10:16.395840098 +0000
	I0731 19:10:16.526080  431884 command_runner.go:130] > Modify: 2024-07-31 19:10:16.390839950 +0000
	I0731 19:10:16.526093  431884 command_runner.go:130] > Change: 2024-07-31 19:10:16.390839950 +0000
	I0731 19:10:16.526098  431884 command_runner.go:130] >  Birth: -
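After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear, which the stat output above confirms. A small Go sketch of such a poll-with-timeout loop; the helper name and 500ms interval are assumptions:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses, roughly
// what "Will wait 60s for socket path" does in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}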
	I0731 19:10:16.526125  431884 start.go:563] Will wait 60s for crictl version
	I0731 19:10:16.526182  431884 ssh_runner.go:195] Run: which crictl
	I0731 19:10:16.529997  431884 command_runner.go:130] > /usr/bin/crictl
	I0731 19:10:16.530172  431884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:10:16.569102  431884 command_runner.go:130] > Version:  0.1.0
	I0731 19:10:16.569134  431884 command_runner.go:130] > RuntimeName:  cri-o
	I0731 19:10:16.569141  431884 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 19:10:16.569154  431884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 19:10:16.570412  431884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:10:16.570518  431884 ssh_runner.go:195] Run: crio --version
	I0731 19:10:16.599059  431884 command_runner.go:130] > crio version 1.29.1
	I0731 19:10:16.599090  431884 command_runner.go:130] > Version:        1.29.1
	I0731 19:10:16.599100  431884 command_runner.go:130] > GitCommit:      unknown
	I0731 19:10:16.599107  431884 command_runner.go:130] > GitCommitDate:  unknown
	I0731 19:10:16.599113  431884 command_runner.go:130] > GitTreeState:   clean
	I0731 19:10:16.599123  431884 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 19:10:16.599130  431884 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 19:10:16.599141  431884 command_runner.go:130] > Compiler:       gc
	I0731 19:10:16.599149  431884 command_runner.go:130] > Platform:       linux/amd64
	I0731 19:10:16.599157  431884 command_runner.go:130] > Linkmode:       dynamic
	I0731 19:10:16.599164  431884 command_runner.go:130] > BuildTags:      
	I0731 19:10:16.599171  431884 command_runner.go:130] >   containers_image_ostree_stub
	I0731 19:10:16.599178  431884 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 19:10:16.599188  431884 command_runner.go:130] >   btrfs_noversion
	I0731 19:10:16.599203  431884 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 19:10:16.599213  431884 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 19:10:16.599220  431884 command_runner.go:130] >   seccomp
	I0731 19:10:16.599227  431884 command_runner.go:130] > LDFlags:          unknown
	I0731 19:10:16.599237  431884 command_runner.go:130] > SeccompEnabled:   true
	I0731 19:10:16.599243  431884 command_runner.go:130] > AppArmorEnabled:  false
	I0731 19:10:16.600495  431884 ssh_runner.go:195] Run: crio --version
	I0731 19:10:16.629043  431884 command_runner.go:130] > crio version 1.29.1
	I0731 19:10:16.629066  431884 command_runner.go:130] > Version:        1.29.1
	I0731 19:10:16.629072  431884 command_runner.go:130] > GitCommit:      unknown
	I0731 19:10:16.629078  431884 command_runner.go:130] > GitCommitDate:  unknown
	I0731 19:10:16.629084  431884 command_runner.go:130] > GitTreeState:   clean
	I0731 19:10:16.629107  431884 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 19:10:16.629113  431884 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 19:10:16.629119  431884 command_runner.go:130] > Compiler:       gc
	I0731 19:10:16.629126  431884 command_runner.go:130] > Platform:       linux/amd64
	I0731 19:10:16.629134  431884 command_runner.go:130] > Linkmode:       dynamic
	I0731 19:10:16.629140  431884 command_runner.go:130] > BuildTags:      
	I0731 19:10:16.629147  431884 command_runner.go:130] >   containers_image_ostree_stub
	I0731 19:10:16.629153  431884 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 19:10:16.629162  431884 command_runner.go:130] >   btrfs_noversion
	I0731 19:10:16.629169  431884 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 19:10:16.629179  431884 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 19:10:16.629185  431884 command_runner.go:130] >   seccomp
	I0731 19:10:16.629191  431884 command_runner.go:130] > LDFlags:          unknown
	I0731 19:10:16.629198  431884 command_runner.go:130] > SeccompEnabled:   true
	I0731 19:10:16.629205  431884 command_runner.go:130] > AppArmorEnabled:  false
	I0731 19:10:16.631236  431884 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:10:16.632625  431884 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:10:16.635405  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:16.635761  431884 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:10:16.635791  431884 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:10:16.636070  431884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:10:16.640249  431884 command_runner.go:130] > 192.168.39.1	host.minikube.internal
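The grep above verifies that the guest's /etc/hosts maps host.minikube.internal to the host-side gateway IP 192.168.39.1. A minimal Go sketch of the same membership check; the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasHostEntry reports whether a hosts file already maps the given name to
// the given IP, mirroring the grep performed in the log above.
func hasHostEntry(contents, ip, name string) bool {
	for _, line := range strings.Split(contents, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == ip && fields[1] == name {
			return true
		}
	}
	return false
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(hasHostEntry(string(data), "192.168.39.1", "host.minikube.internal"))
}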
	I0731 19:10:16.640359  431884 kubeadm.go:883] updating cluster {Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:10:16.640539  431884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:10:16.640604  431884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:10:16.687815  431884 command_runner.go:130] > {
	I0731 19:10:16.687838  431884 command_runner.go:130] >   "images": [
	I0731 19:10:16.687842  431884 command_runner.go:130] >     {
	I0731 19:10:16.687850  431884 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 19:10:16.687856  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.687861  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 19:10:16.687865  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687869  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.687877  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 19:10:16.687887  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 19:10:16.687893  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687898  431884 command_runner.go:130] >       "size": "87165492",
	I0731 19:10:16.687904  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.687912  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.687920  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.687928  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.687937  431884 command_runner.go:130] >     },
	I0731 19:10:16.687945  431884 command_runner.go:130] >     {
	I0731 19:10:16.687951  431884 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 19:10:16.687957  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.687963  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 19:10:16.687969  431884 command_runner.go:130] >       ],
	I0731 19:10:16.687973  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.687982  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 19:10:16.687996  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 19:10:16.688003  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688010  431884 command_runner.go:130] >       "size": "87174707",
	I0731 19:10:16.688019  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688033  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688041  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688048  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688052  431884 command_runner.go:130] >     },
	I0731 19:10:16.688058  431884 command_runner.go:130] >     {
	I0731 19:10:16.688064  431884 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 19:10:16.688070  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688075  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 19:10:16.688081  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688085  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688098  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 19:10:16.688113  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 19:10:16.688119  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688128  431884 command_runner.go:130] >       "size": "1363676",
	I0731 19:10:16.688145  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688151  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688155  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688161  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688165  431884 command_runner.go:130] >     },
	I0731 19:10:16.688170  431884 command_runner.go:130] >     {
	I0731 19:10:16.688177  431884 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 19:10:16.688186  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688197  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 19:10:16.688206  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688213  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688227  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 19:10:16.688245  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 19:10:16.688251  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688255  431884 command_runner.go:130] >       "size": "31470524",
	I0731 19:10:16.688263  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688272  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688282  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688294  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688302  431884 command_runner.go:130] >     },
	I0731 19:10:16.688307  431884 command_runner.go:130] >     {
	I0731 19:10:16.688319  431884 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 19:10:16.688327  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688337  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 19:10:16.688343  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688347  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688362  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 19:10:16.688388  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 19:10:16.688397  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688407  431884 command_runner.go:130] >       "size": "61245718",
	I0731 19:10:16.688416  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.688426  431884 command_runner.go:130] >       "username": "nonroot",
	I0731 19:10:16.688435  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688445  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688453  431884 command_runner.go:130] >     },
	I0731 19:10:16.688461  431884 command_runner.go:130] >     {
	I0731 19:10:16.688470  431884 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 19:10:16.688479  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688492  431884 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 19:10:16.688500  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688506  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688515  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 19:10:16.688530  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 19:10:16.688539  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688549  431884 command_runner.go:130] >       "size": "150779692",
	I0731 19:10:16.688558  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688567  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688577  431884 command_runner.go:130] >       },
	I0731 19:10:16.688586  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688593  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688598  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688607  431884 command_runner.go:130] >     },
	I0731 19:10:16.688616  431884 command_runner.go:130] >     {
	I0731 19:10:16.688629  431884 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 19:10:16.688640  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688652  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 19:10:16.688660  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688669  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688678  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 19:10:16.688691  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 19:10:16.688700  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688710  431884 command_runner.go:130] >       "size": "117609954",
	I0731 19:10:16.688719  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688728  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688736  431884 command_runner.go:130] >       },
	I0731 19:10:16.688743  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688751  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688759  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688765  431884 command_runner.go:130] >     },
	I0731 19:10:16.688769  431884 command_runner.go:130] >     {
	I0731 19:10:16.688779  431884 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 19:10:16.688790  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688802  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 19:10:16.688811  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688820  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688843  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 19:10:16.688855  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 19:10:16.688860  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688867  431884 command_runner.go:130] >       "size": "112198984",
	I0731 19:10:16.688876  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.688885  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.688893  431884 command_runner.go:130] >       },
	I0731 19:10:16.688901  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.688907  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.688914  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.688920  431884 command_runner.go:130] >     },
	I0731 19:10:16.688925  431884 command_runner.go:130] >     {
	I0731 19:10:16.688933  431884 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 19:10:16.688936  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.688941  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 19:10:16.688949  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688954  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.688966  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 19:10:16.688978  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 19:10:16.688986  431884 command_runner.go:130] >       ],
	I0731 19:10:16.688996  431884 command_runner.go:130] >       "size": "85953945",
	I0731 19:10:16.689006  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.689015  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689022  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689026  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.689034  431884 command_runner.go:130] >     },
	I0731 19:10:16.689039  431884 command_runner.go:130] >     {
	I0731 19:10:16.689053  431884 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 19:10:16.689063  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.689074  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 19:10:16.689083  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689092  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.689103  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 19:10:16.689113  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 19:10:16.689118  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689127  431884 command_runner.go:130] >       "size": "63051080",
	I0731 19:10:16.689138  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.689147  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.689155  431884 command_runner.go:130] >       },
	I0731 19:10:16.689164  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689173  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689183  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.689190  431884 command_runner.go:130] >     },
	I0731 19:10:16.689194  431884 command_runner.go:130] >     {
	I0731 19:10:16.689201  431884 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 19:10:16.689210  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.689220  431884 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 19:10:16.689228  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689235  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.689249  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 19:10:16.689263  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 19:10:16.689272  431884 command_runner.go:130] >       ],
	I0731 19:10:16.689279  431884 command_runner.go:130] >       "size": "750414",
	I0731 19:10:16.689283  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.689289  431884 command_runner.go:130] >         "value": "65535"
	I0731 19:10:16.689298  431884 command_runner.go:130] >       },
	I0731 19:10:16.689305  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.689314  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.689324  431884 command_runner.go:130] >       "pinned": true
	I0731 19:10:16.689332  431884 command_runner.go:130] >     }
	I0731 19:10:16.689341  431884 command_runner.go:130] >   ]
	I0731 19:10:16.689349  431884 command_runner.go:130] > }
	I0731 19:10:16.689944  431884 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:10:16.689971  431884 crio.go:433] Images already preloaded, skipping extraction
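The JSON above is the output of crictl images --output json; minikube compares the repoTags it finds against the images required for Kubernetes v1.30.3 before deciding whether the preload tarball needs extracting. A hedged Go sketch of parsing that shape and reporting missing tags; the struct and function names are assumptions, not minikube's types:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of the crictl images JSON shown in the log.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the required tags not present in the crictl output.
func missingImages(raw []byte, required []string) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, want := range required {
		if !have[want] {
			missing = append(missing, want)
		}
	}
	return missing, nil
}

func main() {
	raw := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
	m, err := missingImages(raw, []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"})
	fmt.Println(m, err)
}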
	I0731 19:10:16.690050  431884 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:10:16.724280  431884 command_runner.go:130] > {
	I0731 19:10:16.724305  431884 command_runner.go:130] >   "images": [
	I0731 19:10:16.724310  431884 command_runner.go:130] >     {
	I0731 19:10:16.724322  431884 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 19:10:16.724328  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724336  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 19:10:16.724341  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724347  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724358  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 19:10:16.724369  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 19:10:16.724390  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724401  431884 command_runner.go:130] >       "size": "87165492",
	I0731 19:10:16.724411  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724418  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724429  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724438  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724444  431884 command_runner.go:130] >     },
	I0731 19:10:16.724450  431884 command_runner.go:130] >     {
	I0731 19:10:16.724461  431884 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 19:10:16.724471  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724482  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 19:10:16.724494  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724502  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724517  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 19:10:16.724532  431884 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 19:10:16.724544  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724555  431884 command_runner.go:130] >       "size": "87174707",
	I0731 19:10:16.724564  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724574  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724584  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724591  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724599  431884 command_runner.go:130] >     },
	I0731 19:10:16.724605  431884 command_runner.go:130] >     {
	I0731 19:10:16.724619  431884 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 19:10:16.724629  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724639  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 19:10:16.724662  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724671  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724684  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 19:10:16.724699  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 19:10:16.724707  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724716  431884 command_runner.go:130] >       "size": "1363676",
	I0731 19:10:16.724724  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724732  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724758  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724767  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724772  431884 command_runner.go:130] >     },
	I0731 19:10:16.724776  431884 command_runner.go:130] >     {
	I0731 19:10:16.724786  431884 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 19:10:16.724796  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724806  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 19:10:16.724813  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724822  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724838  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 19:10:16.724861  431884 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 19:10:16.724869  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724877  431884 command_runner.go:130] >       "size": "31470524",
	I0731 19:10:16.724889  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.724899  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.724906  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.724915  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.724923  431884 command_runner.go:130] >     },
	I0731 19:10:16.724929  431884 command_runner.go:130] >     {
	I0731 19:10:16.724943  431884 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 19:10:16.724952  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.724960  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 19:10:16.724965  431884 command_runner.go:130] >       ],
	I0731 19:10:16.724972  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.724988  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 19:10:16.725003  431884 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 19:10:16.725011  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725019  431884 command_runner.go:130] >       "size": "61245718",
	I0731 19:10:16.725028  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.725037  431884 command_runner.go:130] >       "username": "nonroot",
	I0731 19:10:16.725047  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725054  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725062  431884 command_runner.go:130] >     },
	I0731 19:10:16.725068  431884 command_runner.go:130] >     {
	I0731 19:10:16.725079  431884 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 19:10:16.725089  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725099  431884 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 19:10:16.725124  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725135  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725149  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 19:10:16.725163  431884 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 19:10:16.725172  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725179  431884 command_runner.go:130] >       "size": "150779692",
	I0731 19:10:16.725188  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725195  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725208  431884 command_runner.go:130] >       },
	I0731 19:10:16.725219  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725228  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725236  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725247  431884 command_runner.go:130] >     },
	I0731 19:10:16.725255  431884 command_runner.go:130] >     {
	I0731 19:10:16.725266  431884 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 19:10:16.725275  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725283  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 19:10:16.725292  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725299  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725315  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 19:10:16.725330  431884 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 19:10:16.725339  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725347  431884 command_runner.go:130] >       "size": "117609954",
	I0731 19:10:16.725355  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725362  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725370  431884 command_runner.go:130] >       },
	I0731 19:10:16.725377  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725387  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725395  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725403  431884 command_runner.go:130] >     },
	I0731 19:10:16.725409  431884 command_runner.go:130] >     {
	I0731 19:10:16.725421  431884 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 19:10:16.725429  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725439  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 19:10:16.725447  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725455  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725480  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 19:10:16.725502  431884 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 19:10:16.725509  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725516  431884 command_runner.go:130] >       "size": "112198984",
	I0731 19:10:16.725525  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725532  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725541  431884 command_runner.go:130] >       },
	I0731 19:10:16.725548  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725555  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725561  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725569  431884 command_runner.go:130] >     },
	I0731 19:10:16.725575  431884 command_runner.go:130] >     {
	I0731 19:10:16.725590  431884 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 19:10:16.725599  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725609  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 19:10:16.725617  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725624  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725639  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 19:10:16.725658  431884 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 19:10:16.725667  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725674  431884 command_runner.go:130] >       "size": "85953945",
	I0731 19:10:16.725684  431884 command_runner.go:130] >       "uid": null,
	I0731 19:10:16.725691  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725700  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725707  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725715  431884 command_runner.go:130] >     },
	I0731 19:10:16.725722  431884 command_runner.go:130] >     {
	I0731 19:10:16.725735  431884 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 19:10:16.725745  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725757  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 19:10:16.725765  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725772  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725786  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 19:10:16.725802  431884 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 19:10:16.725810  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725818  431884 command_runner.go:130] >       "size": "63051080",
	I0731 19:10:16.725826  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725834  431884 command_runner.go:130] >         "value": "0"
	I0731 19:10:16.725842  431884 command_runner.go:130] >       },
	I0731 19:10:16.725849  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.725858  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.725864  431884 command_runner.go:130] >       "pinned": false
	I0731 19:10:16.725869  431884 command_runner.go:130] >     },
	I0731 19:10:16.725875  431884 command_runner.go:130] >     {
	I0731 19:10:16.725888  431884 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 19:10:16.725897  431884 command_runner.go:130] >       "repoTags": [
	I0731 19:10:16.725905  431884 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 19:10:16.725914  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725923  431884 command_runner.go:130] >       "repoDigests": [
	I0731 19:10:16.725938  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 19:10:16.725953  431884 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 19:10:16.725961  431884 command_runner.go:130] >       ],
	I0731 19:10:16.725969  431884 command_runner.go:130] >       "size": "750414",
	I0731 19:10:16.725977  431884 command_runner.go:130] >       "uid": {
	I0731 19:10:16.725985  431884 command_runner.go:130] >         "value": "65535"
	I0731 19:10:16.725993  431884 command_runner.go:130] >       },
	I0731 19:10:16.726001  431884 command_runner.go:130] >       "username": "",
	I0731 19:10:16.726010  431884 command_runner.go:130] >       "spec": null,
	I0731 19:10:16.726018  431884 command_runner.go:130] >       "pinned": true
	I0731 19:10:16.726026  431884 command_runner.go:130] >     }
	I0731 19:10:16.726034  431884 command_runner.go:130] >   ]
	I0731 19:10:16.726041  431884 command_runner.go:130] > }
	I0731 19:10:16.726173  431884 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:10:16.726186  431884 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:10:16.726196  431884 kubeadm.go:934] updating node { 192.168.39.55 8443 v1.30.3 crio true true} ...
	I0731 19:10:16.726333  431884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-741077 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
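The [Unit]/[Service]/[Install] block above is the kubelet systemd drop-in minikube renders for this node, with --hostname-override and --node-ip filled in per node. An illustrative Go text/template sketch that produces an equivalent drop-in; the template string and field names are assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit renders a drop-in like the one logged above; it is an
// illustrative template, not the one minikube ships.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.30.3",
		Node:    "multinode-741077",
		IP:      "192.168.39.55",
	})
}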
	I0731 19:10:16.726428  431884 ssh_runner.go:195] Run: crio config
	I0731 19:10:16.768123  431884 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 19:10:16.768179  431884 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 19:10:16.768187  431884 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 19:10:16.768190  431884 command_runner.go:130] > #
	I0731 19:10:16.768200  431884 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 19:10:16.768206  431884 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 19:10:16.768212  431884 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 19:10:16.768218  431884 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 19:10:16.768222  431884 command_runner.go:130] > # reload'.
	I0731 19:10:16.768228  431884 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 19:10:16.768234  431884 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 19:10:16.768244  431884 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 19:10:16.768256  431884 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 19:10:16.768263  431884 command_runner.go:130] > [crio]
	I0731 19:10:16.768273  431884 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 19:10:16.768280  431884 command_runner.go:130] > # containers images, in this directory.
	I0731 19:10:16.768298  431884 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 19:10:16.768319  431884 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 19:10:16.768332  431884 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 19:10:16.768341  431884 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 19:10:16.768561  431884 command_runner.go:130] > # imagestore = ""
	I0731 19:10:16.768580  431884 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 19:10:16.768590  431884 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 19:10:16.768703  431884 command_runner.go:130] > storage_driver = "overlay"
	I0731 19:10:16.768718  431884 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 19:10:16.768726  431884 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 19:10:16.768732  431884 command_runner.go:130] > storage_option = [
	I0731 19:10:16.769386  431884 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 19:10:16.769404  431884 command_runner.go:130] > ]
	I0731 19:10:16.769414  431884 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 19:10:16.769438  431884 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 19:10:16.769490  431884 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 19:10:16.769510  431884 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 19:10:16.769518  431884 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 19:10:16.769527  431884 command_runner.go:130] > # always happen on a node reboot
	I0731 19:10:16.769533  431884 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 19:10:16.769550  431884 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 19:10:16.769558  431884 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 19:10:16.769567  431884 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 19:10:16.769574  431884 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 19:10:16.769587  431884 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 19:10:16.769600  431884 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 19:10:16.769609  431884 command_runner.go:130] > # internal_wipe = true
	I0731 19:10:16.769619  431884 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 19:10:16.769630  431884 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 19:10:16.769640  431884 command_runner.go:130] > # internal_repair = false
	I0731 19:10:16.769653  431884 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 19:10:16.769665  431884 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 19:10:16.769676  431884 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 19:10:16.769686  431884 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 19:10:16.769697  431884 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 19:10:16.769704  431884 command_runner.go:130] > [crio.api]
	I0731 19:10:16.769713  431884 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 19:10:16.769723  431884 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 19:10:16.769734  431884 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 19:10:16.769742  431884 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 19:10:16.769753  431884 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 19:10:16.769763  431884 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 19:10:16.769770  431884 command_runner.go:130] > # stream_port = "0"
	I0731 19:10:16.769780  431884 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 19:10:16.769792  431884 command_runner.go:130] > # stream_enable_tls = false
	I0731 19:10:16.769803  431884 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 19:10:16.769811  431884 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 19:10:16.769825  431884 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 19:10:16.769836  431884 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 19:10:16.769843  431884 command_runner.go:130] > # minutes.
	I0731 19:10:16.769853  431884 command_runner.go:130] > # stream_tls_cert = ""
	I0731 19:10:16.769869  431884 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 19:10:16.769890  431884 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 19:10:16.769902  431884 command_runner.go:130] > # stream_tls_key = ""
	I0731 19:10:16.769915  431884 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 19:10:16.769927  431884 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 19:10:16.769947  431884 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 19:10:16.769958  431884 command_runner.go:130] > # stream_tls_ca = ""
	I0731 19:10:16.769970  431884 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 19:10:16.769978  431884 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 19:10:16.769992  431884 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 19:10:16.770004  431884 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
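For reference, enabling the encrypted stream server described above could look like the following sketch in crio.conf; the certificate and key paths are placeholders, not values taken from this run.

	# Sketch only: serve the [crio.api] stream server over TLS (paths are hypothetical).
	[crio.api]
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"
	stream_tls_key = "/etc/crio/stream.key"
	stream_tls_ca = "/etc/crio/stream-ca.crt"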
	I0731 19:10:16.770014  431884 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 19:10:16.770026  431884 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 19:10:16.770034  431884 command_runner.go:130] > [crio.runtime]
	I0731 19:10:16.770042  431884 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 19:10:16.770049  431884 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 19:10:16.770057  431884 command_runner.go:130] > # "nofile=1024:2048"
	I0731 19:10:16.770067  431884 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 19:10:16.770080  431884 command_runner.go:130] > # default_ulimits = [
	I0731 19:10:16.770085  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770095  431884 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 19:10:16.770101  431884 command_runner.go:130] > # no_pivot = false
	I0731 19:10:16.770111  431884 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 19:10:16.770123  431884 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 19:10:16.770132  431884 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 19:10:16.770145  431884 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 19:10:16.770158  431884 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 19:10:16.770170  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 19:10:16.770182  431884 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 19:10:16.770188  431884 command_runner.go:130] > # Cgroup setting for conmon
	I0731 19:10:16.770201  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 19:10:16.770210  431884 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 19:10:16.770221  431884 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 19:10:16.770234  431884 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 19:10:16.770247  431884 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 19:10:16.770256  431884 command_runner.go:130] > conmon_env = [
	I0731 19:10:16.770269  431884 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 19:10:16.770283  431884 command_runner.go:130] > ]
	I0731 19:10:16.770292  431884 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 19:10:16.770302  431884 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 19:10:16.770311  431884 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 19:10:16.770321  431884 command_runner.go:130] > # default_env = [
	I0731 19:10:16.770326  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770337  431884 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 19:10:16.770373  431884 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0731 19:10:16.770387  431884 command_runner.go:130] > # selinux = false
	I0731 19:10:16.770398  431884 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 19:10:16.770413  431884 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 19:10:16.770423  431884 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 19:10:16.770430  431884 command_runner.go:130] > # seccomp_profile = ""
	I0731 19:10:16.770438  431884 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 19:10:16.770450  431884 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 19:10:16.770459  431884 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 19:10:16.770470  431884 command_runner.go:130] > # which might increase security.
	I0731 19:10:16.770479  431884 command_runner.go:130] > # This option is currently deprecated,
	I0731 19:10:16.770492  431884 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 19:10:16.770502  431884 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 19:10:16.770512  431884 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 19:10:16.770525  431884 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 19:10:16.770536  431884 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 19:10:16.770549  431884 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 19:10:16.770560  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.770568  431884 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 19:10:16.770581  431884 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 19:10:16.770588  431884 command_runner.go:130] > # the cgroup blockio controller.
	I0731 19:10:16.770599  431884 command_runner.go:130] > # blockio_config_file = ""
	I0731 19:10:16.770614  431884 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 19:10:16.770624  431884 command_runner.go:130] > # blockio parameters.
	I0731 19:10:16.770634  431884 command_runner.go:130] > # blockio_reload = false
	I0731 19:10:16.770645  431884 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 19:10:16.770653  431884 command_runner.go:130] > # irqbalance daemon.
	I0731 19:10:16.770661  431884 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 19:10:16.770673  431884 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 19:10:16.770683  431884 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 19:10:16.770697  431884 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 19:10:16.770740  431884 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 19:10:16.770750  431884 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 19:10:16.770758  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.770764  431884 command_runner.go:130] > # rdt_config_file = ""
	I0731 19:10:16.770772  431884 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 19:10:16.770780  431884 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 19:10:16.770804  431884 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 19:10:16.770814  431884 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 19:10:16.770823  431884 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 19:10:16.770835  431884 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 19:10:16.770841  431884 command_runner.go:130] > # will be added.
	I0731 19:10:16.770846  431884 command_runner.go:130] > # default_capabilities = [
	I0731 19:10:16.770855  431884 command_runner.go:130] > # 	"CHOWN",
	I0731 19:10:16.770861  431884 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 19:10:16.770866  431884 command_runner.go:130] > # 	"FSETID",
	I0731 19:10:16.770872  431884 command_runner.go:130] > # 	"FOWNER",
	I0731 19:10:16.770877  431884 command_runner.go:130] > # 	"SETGID",
	I0731 19:10:16.770890  431884 command_runner.go:130] > # 	"SETUID",
	I0731 19:10:16.770899  431884 command_runner.go:130] > # 	"SETPCAP",
	I0731 19:10:16.770905  431884 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 19:10:16.770914  431884 command_runner.go:130] > # 	"KILL",
	I0731 19:10:16.770920  431884 command_runner.go:130] > # ]
	I0731 19:10:16.770933  431884 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 19:10:16.770946  431884 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 19:10:16.770954  431884 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 19:10:16.770965  431884 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 19:10:16.770976  431884 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 19:10:16.770983  431884 command_runner.go:130] > default_sysctls = [
	I0731 19:10:16.770993  431884 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 19:10:16.771000  431884 command_runner.go:130] > ]
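As a minimal sketch of extending the default_sysctls list shown above, an extra entry could be appended as follows; the ping_group_range value is an illustrative addition, not part of this cluster's config.

	# default_sysctls from this run, plus one hypothetical extra entry.
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
		"net.ipv4.ping_group_range=0 2147483647",   # illustrative, not from this log
	]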
	I0731 19:10:16.771012  431884 command_runner.go:130] > # List of devices on the host that a
	I0731 19:10:16.771022  431884 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 19:10:16.771029  431884 command_runner.go:130] > # allowed_devices = [
	I0731 19:10:16.771036  431884 command_runner.go:130] > # 	"/dev/fuse",
	I0731 19:10:16.771045  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771052  431884 command_runner.go:130] > # List of additional devices, specified as
	I0731 19:10:16.771064  431884 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 19:10:16.771075  431884 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 19:10:16.771084  431884 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 19:10:16.771092  431884 command_runner.go:130] > # additional_devices = [
	I0731 19:10:16.771097  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771107  431884 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 19:10:16.771119  431884 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 19:10:16.771127  431884 command_runner.go:130] > # 	"/etc/cdi",
	I0731 19:10:16.771132  431884 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 19:10:16.771140  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771152  431884 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 19:10:16.771164  431884 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 19:10:16.771173  431884 command_runner.go:130] > # Defaults to false.
	I0731 19:10:16.771181  431884 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 19:10:16.771195  431884 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 19:10:16.771206  431884 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 19:10:16.771215  431884 command_runner.go:130] > # hooks_dir = [
	I0731 19:10:16.771222  431884 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 19:10:16.771230  431884 command_runner.go:130] > # ]
	I0731 19:10:16.771238  431884 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 19:10:16.771250  431884 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 19:10:16.771258  431884 command_runner.go:130] > # its default mounts from the following two files:
	I0731 19:10:16.771264  431884 command_runner.go:130] > #
	I0731 19:10:16.771274  431884 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 19:10:16.771287  431884 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 19:10:16.771299  431884 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 19:10:16.771305  431884 command_runner.go:130] > #
	I0731 19:10:16.771316  431884 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 19:10:16.771329  431884 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 19:10:16.771345  431884 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 19:10:16.771356  431884 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 19:10:16.771361  431884 command_runner.go:130] > #
	I0731 19:10:16.771368  431884 command_runner.go:130] > # default_mounts_file = ""
	I0731 19:10:16.771379  431884 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 19:10:16.771388  431884 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 19:10:16.771397  431884 command_runner.go:130] > pids_limit = 1024
	I0731 19:10:16.771408  431884 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 19:10:16.771421  431884 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 19:10:16.771436  431884 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 19:10:16.771452  431884 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 19:10:16.771461  431884 command_runner.go:130] > # log_size_max = -1
	I0731 19:10:16.771471  431884 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 19:10:16.771478  431884 command_runner.go:130] > # log_to_journald = false
	I0731 19:10:16.771489  431884 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 19:10:16.771500  431884 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 19:10:16.771515  431884 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 19:10:16.771527  431884 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 19:10:16.771538  431884 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 19:10:16.771546  431884 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 19:10:16.771555  431884 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 19:10:16.771567  431884 command_runner.go:130] > # read_only = false
	I0731 19:10:16.771577  431884 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 19:10:16.771590  431884 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 19:10:16.771600  431884 command_runner.go:130] > # live configuration reload.
	I0731 19:10:16.771607  431884 command_runner.go:130] > # log_level = "info"
	I0731 19:10:16.771620  431884 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 19:10:16.771631  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.771637  431884 command_runner.go:130] > # log_filter = ""
	I0731 19:10:16.771645  431884 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 19:10:16.771657  431884 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 19:10:16.771663  431884 command_runner.go:130] > # separated by comma.
	I0731 19:10:16.771678  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771687  431884 command_runner.go:130] > # uid_mappings = ""
	I0731 19:10:16.771696  431884 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 19:10:16.771708  431884 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 19:10:16.771719  431884 command_runner.go:130] > # separated by comma.
	I0731 19:10:16.771733  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771743  431884 command_runner.go:130] > # gid_mappings = ""
	I0731 19:10:16.771753  431884 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 19:10:16.771766  431884 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 19:10:16.771776  431884 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 19:10:16.771791  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771802  431884 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 19:10:16.771812  431884 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 19:10:16.771824  431884 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 19:10:16.771838  431884 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 19:10:16.771853  431884 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 19:10:16.771862  431884 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 19:10:16.771871  431884 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 19:10:16.771891  431884 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 19:10:16.771904  431884 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 19:10:16.771920  431884 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 19:10:16.771931  431884 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 19:10:16.771945  431884 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 19:10:16.771955  431884 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 19:10:16.771963  431884 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 19:10:16.771974  431884 command_runner.go:130] > drop_infra_ctr = false
	I0731 19:10:16.771987  431884 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 19:10:16.771998  431884 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 19:10:16.772009  431884 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 19:10:16.772015  431884 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 19:10:16.772026  431884 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 19:10:16.772040  431884 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 19:10:16.772054  431884 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 19:10:16.772066  431884 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 19:10:16.772077  431884 command_runner.go:130] > # shared_cpuset = ""
	I0731 19:10:16.772090  431884 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 19:10:16.772101  431884 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 19:10:16.772112  431884 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 19:10:16.772127  431884 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 19:10:16.772136  431884 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 19:10:16.772147  431884 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 19:10:16.772161  431884 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 19:10:16.772168  431884 command_runner.go:130] > # enable_criu_support = false
	I0731 19:10:16.772180  431884 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 19:10:16.772193  431884 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 19:10:16.772202  431884 command_runner.go:130] > # enable_pod_events = false
	I0731 19:10:16.772213  431884 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 19:10:16.772238  431884 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 19:10:16.772247  431884 command_runner.go:130] > # default_runtime = "runc"
	I0731 19:10:16.772255  431884 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 19:10:16.772269  431884 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the source as a directory).
	I0731 19:10:16.772287  431884 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 19:10:16.772299  431884 command_runner.go:130] > # creation as a file is not desired either.
	I0731 19:10:16.772316  431884 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 19:10:16.772341  431884 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 19:10:16.772351  431884 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 19:10:16.772359  431884 command_runner.go:130] > # ]
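As a sketch of the option just described, the /etc/hostname case mentioned in the comment could be listed like this; in the config actually dumped above the option remains commented out.

	# Illustrative only: fail container creation if this host path is absent.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]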
	I0731 19:10:16.772370  431884 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 19:10:16.772394  431884 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 19:10:16.772406  431884 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 19:10:16.772417  431884 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 19:10:16.772422  431884 command_runner.go:130] > #
	I0731 19:10:16.772429  431884 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 19:10:16.772439  431884 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 19:10:16.772465  431884 command_runner.go:130] > # runtime_type = "oci"
	I0731 19:10:16.772476  431884 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 19:10:16.772484  431884 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 19:10:16.772493  431884 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 19:10:16.772499  431884 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 19:10:16.772508  431884 command_runner.go:130] > # monitor_env = []
	I0731 19:10:16.772514  431884 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 19:10:16.772521  431884 command_runner.go:130] > # allowed_annotations = []
	I0731 19:10:16.772531  431884 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 19:10:16.772539  431884 command_runner.go:130] > # Where:
	I0731 19:10:16.772546  431884 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 19:10:16.772558  431884 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 19:10:16.772570  431884 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 19:10:16.772582  431884 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 19:10:16.772590  431884 command_runner.go:130] > #   in $PATH.
	I0731 19:10:16.772600  431884 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 19:10:16.772610  431884 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 19:10:16.772621  431884 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 19:10:16.772629  431884 command_runner.go:130] > #   state.
	I0731 19:10:16.772639  431884 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 19:10:16.772651  431884 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0731 19:10:16.772663  431884 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 19:10:16.772674  431884 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 19:10:16.772687  431884 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 19:10:16.772698  431884 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 19:10:16.772708  431884 command_runner.go:130] > #   The currently recognized values are:
	I0731 19:10:16.772718  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 19:10:16.772732  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 19:10:16.772749  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 19:10:16.772761  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 19:10:16.772774  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 19:10:16.772787  431884 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 19:10:16.772799  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 19:10:16.772809  431884 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 19:10:16.772821  431884 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 19:10:16.772833  431884 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 19:10:16.772843  431884 command_runner.go:130] > #   deprecated option "conmon".
	I0731 19:10:16.772854  431884 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 19:10:16.772865  431884 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 19:10:16.772877  431884 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 19:10:16.772893  431884 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 19:10:16.772906  431884 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 19:10:16.772917  431884 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 19:10:16.772930  431884 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 19:10:16.772941  431884 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 19:10:16.772948  431884 command_runner.go:130] > #
	I0731 19:10:16.772954  431884 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 19:10:16.772962  431884 command_runner.go:130] > #
	I0731 19:10:16.772969  431884 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 19:10:16.772977  431884 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 19:10:16.772980  431884 command_runner.go:130] > #
	I0731 19:10:16.772985  431884 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 19:10:16.772993  431884 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 19:10:16.772997  431884 command_runner.go:130] > #
	I0731 19:10:16.773004  431884 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 19:10:16.773007  431884 command_runner.go:130] > # feature.
	I0731 19:10:16.773010  431884 command_runner.go:130] > #
	I0731 19:10:16.773016  431884 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0731 19:10:16.773024  431884 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 19:10:16.773030  431884 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 19:10:16.773038  431884 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 19:10:16.773043  431884 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 19:10:16.773048  431884 command_runner.go:130] > #
	I0731 19:10:16.773056  431884 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 19:10:16.773066  431884 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 19:10:16.773069  431884 command_runner.go:130] > #
	I0731 19:10:16.773075  431884 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 19:10:16.773082  431884 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 19:10:16.773086  431884 command_runner.go:130] > #
	I0731 19:10:16.773092  431884 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 19:10:16.773098  431884 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 19:10:16.773106  431884 command_runner.go:130] > # limitation.
	I0731 19:10:16.773110  431884 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 19:10:16.773114  431884 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 19:10:16.773118  431884 command_runner.go:130] > runtime_type = "oci"
	I0731 19:10:16.773123  431884 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 19:10:16.773126  431884 command_runner.go:130] > runtime_config_path = ""
	I0731 19:10:16.773134  431884 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 19:10:16.773139  431884 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 19:10:16.773145  431884 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 19:10:16.773148  431884 command_runner.go:130] > monitor_env = [
	I0731 19:10:16.773153  431884 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 19:10:16.773156  431884 command_runner.go:130] > ]
	I0731 19:10:16.773162  431884 command_runner.go:130] > privileged_without_host_devices = false
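Following the runtime-handler format documented above, an additional handler could be declared as in the sketch below; the crun binary path, root directory, and the allowed seccomp-notifier annotation are assumptions for illustration and are not part of this cluster's config.

	# Hypothetical second runtime handler using the fields documented above.
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"        # assumed path, not from this run
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",   # enables the notifier feature described above
	]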
	I0731 19:10:16.773171  431884 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 19:10:16.773176  431884 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 19:10:16.773184  431884 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 19:10:16.773191  431884 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 19:10:16.773200  431884 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 19:10:16.773205  431884 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 19:10:16.773216  431884 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 19:10:16.773225  431884 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 19:10:16.773231  431884 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 19:10:16.773240  431884 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 19:10:16.773243  431884 command_runner.go:130] > # Example:
	I0731 19:10:16.773248  431884 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 19:10:16.773252  431884 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 19:10:16.773256  431884 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 19:10:16.773261  431884 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 19:10:16.773264  431884 command_runner.go:130] > # cpuset = "0-1"
	I0731 19:10:16.773268  431884 command_runner.go:130] > # cpushares = "0"
	I0731 19:10:16.773271  431884 command_runner.go:130] > # Where:
	I0731 19:10:16.773278  431884 command_runner.go:130] > # The workload name is workload-type.
	I0731 19:10:16.773285  431884 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 19:10:16.773290  431884 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 19:10:16.773294  431884 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 19:10:16.773301  431884 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 19:10:16.773306  431884 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 19:10:16.773310  431884 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 19:10:16.773316  431884 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 19:10:16.773320  431884 command_runner.go:130] > # Default value is set to true
	I0731 19:10:16.773324  431884 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 19:10:16.773332  431884 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 19:10:16.773336  431884 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 19:10:16.773340  431884 command_runner.go:130] > # Default value is set to 'false'
	I0731 19:10:16.773347  431884 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 19:10:16.773353  431884 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 19:10:16.773358  431884 command_runner.go:130] > #
	I0731 19:10:16.773363  431884 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 19:10:16.773369  431884 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 19:10:16.773377  431884 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 19:10:16.773383  431884 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 19:10:16.773391  431884 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 19:10:16.773395  431884 command_runner.go:130] > [crio.image]
	I0731 19:10:16.773402  431884 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 19:10:16.773410  431884 command_runner.go:130] > # default_transport = "docker://"
	I0731 19:10:16.773419  431884 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 19:10:16.773432  431884 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 19:10:16.773441  431884 command_runner.go:130] > # global_auth_file = ""
	I0731 19:10:16.773450  431884 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 19:10:16.773461  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.773467  431884 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 19:10:16.773475  431884 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 19:10:16.773481  431884 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 19:10:16.773488  431884 command_runner.go:130] > # This option supports live configuration reload.
	I0731 19:10:16.773492  431884 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 19:10:16.773499  431884 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 19:10:16.773505  431884 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0731 19:10:16.773518  431884 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0731 19:10:16.773526  431884 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 19:10:16.773529  431884 command_runner.go:130] > # pause_command = "/pause"
	I0731 19:10:16.773535  431884 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 19:10:16.773542  431884 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 19:10:16.773548  431884 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 19:10:16.773553  431884 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 19:10:16.773560  431884 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 19:10:16.773566  431884 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 19:10:16.773571  431884 command_runner.go:130] > # pinned_images = [
	I0731 19:10:16.773575  431884 command_runner.go:130] > # ]
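A hypothetical pinned_images list exercising the exact, glob, and keyword patterns described above; the specific image names are illustrative.

	# Sketch only: images the kubelet's garbage collection should never remove.
	pinned_images = [
		"registry.k8s.io/pause:3.9",   # exact match
		"quay.io/crio/*",              # glob match (trailing wildcard)
		"*critical*",                  # keyword match (wildcards on both ends)
	]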
	I0731 19:10:16.773581  431884 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 19:10:16.773589  431884 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 19:10:16.773595  431884 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 19:10:16.773603  431884 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 19:10:16.773607  431884 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 19:10:16.773612  431884 command_runner.go:130] > # signature_policy = ""
	I0731 19:10:16.773618  431884 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 19:10:16.773626  431884 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 19:10:16.773632  431884 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 19:10:16.773640  431884 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 19:10:16.773645  431884 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 19:10:16.773652  431884 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 19:10:16.773657  431884 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 19:10:16.773665  431884 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 19:10:16.773670  431884 command_runner.go:130] > # changing them here.
	I0731 19:10:16.773679  431884 command_runner.go:130] > # insecure_registries = [
	I0731 19:10:16.773684  431884 command_runner.go:130] > # ]
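An illustrative insecure_registries entry follows; the registry host is a placeholder, and as the comment above notes, configuring registries via /etc/containers/registries.conf is preferred.

	# Hypothetical example: skip TLS verification for a local registry.
	insecure_registries = [
		"registry.local:5000",
	]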
	I0731 19:10:16.773697  431884 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 19:10:16.773708  431884 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 19:10:16.773718  431884 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 19:10:16.773726  431884 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 19:10:16.773736  431884 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 19:10:16.773746  431884 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 19:10:16.773751  431884 command_runner.go:130] > # CNI plugins.
	I0731 19:10:16.773755  431884 command_runner.go:130] > [crio.network]
	I0731 19:10:16.773763  431884 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 19:10:16.773771  431884 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 19:10:16.773777  431884 command_runner.go:130] > # cni_default_network = ""
	I0731 19:10:16.773783  431884 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 19:10:16.773789  431884 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 19:10:16.773794  431884 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 19:10:16.773798  431884 command_runner.go:130] > # plugin_dirs = [
	I0731 19:10:16.773804  431884 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 19:10:16.773807  431884 command_runner.go:130] > # ]
	I0731 19:10:16.773813  431884 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 19:10:16.773818  431884 command_runner.go:130] > [crio.metrics]
	I0731 19:10:16.773823  431884 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 19:10:16.773829  431884 command_runner.go:130] > enable_metrics = true
	I0731 19:10:16.773833  431884 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 19:10:16.773845  431884 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 19:10:16.773851  431884 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 19:10:16.773857  431884 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 19:10:16.773863  431884 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 19:10:16.773869  431884 command_runner.go:130] > # metrics_collectors = [
	I0731 19:10:16.773873  431884 command_runner.go:130] > # 	"operations",
	I0731 19:10:16.773877  431884 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 19:10:16.773882  431884 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 19:10:16.773891  431884 command_runner.go:130] > # 	"operations_errors",
	I0731 19:10:16.773895  431884 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 19:10:16.773900  431884 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 19:10:16.773904  431884 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 19:10:16.773910  431884 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 19:10:16.773914  431884 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 19:10:16.773920  431884 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 19:10:16.773924  431884 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 19:10:16.773928  431884 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 19:10:16.773932  431884 command_runner.go:130] > # 	"containers_oom_total",
	I0731 19:10:16.773936  431884 command_runner.go:130] > # 	"containers_oom",
	I0731 19:10:16.773940  431884 command_runner.go:130] > # 	"processes_defunct",
	I0731 19:10:16.773946  431884 command_runner.go:130] > # 	"operations_total",
	I0731 19:10:16.773953  431884 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 19:10:16.773959  431884 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 19:10:16.773968  431884 command_runner.go:130] > # 	"operations_errors_total",
	I0731 19:10:16.773976  431884 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 19:10:16.773986  431884 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 19:10:16.773995  431884 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 19:10:16.774002  431884 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 19:10:16.774013  431884 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 19:10:16.774017  431884 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 19:10:16.774022  431884 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 19:10:16.774029  431884 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 19:10:16.774036  431884 command_runner.go:130] > # ]
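A sketch of restricting metrics to a subset of the collectors listed above; the collector names are copied from that list, but the selection itself is illustrative.

	# Sketch only: enable metrics with a reduced collector set.
	enable_metrics = true
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]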
	I0731 19:10:16.774043  431884 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 19:10:16.774047  431884 command_runner.go:130] > # metrics_port = 9090
	I0731 19:10:16.774053  431884 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 19:10:16.774057  431884 command_runner.go:130] > # metrics_socket = ""
	I0731 19:10:16.774063  431884 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 19:10:16.774069  431884 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 19:10:16.774082  431884 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 19:10:16.774094  431884 command_runner.go:130] > # certificate on any modification event.
	I0731 19:10:16.774104  431884 command_runner.go:130] > # metrics_cert = ""
	I0731 19:10:16.774113  431884 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 19:10:16.774120  431884 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 19:10:16.774124  431884 command_runner.go:130] > # metrics_key = ""
	I0731 19:10:16.774130  431884 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 19:10:16.774135  431884 command_runner.go:130] > [crio.tracing]
	I0731 19:10:16.774141  431884 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 19:10:16.774148  431884 command_runner.go:130] > # enable_tracing = false
	I0731 19:10:16.774153  431884 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 19:10:16.774159  431884 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 19:10:16.774168  431884 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 19:10:16.774179  431884 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
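A minimal sketch of turning tracing on with the defaults mentioned in the comments above; per those comments, a sampling rate of 1000000 means every span is sampled.

	# Sketch only: export OpenTelemetry traces to the default collector endpoint.
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000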
	I0731 19:10:16.774188  431884 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 19:10:16.774197  431884 command_runner.go:130] > [crio.nri]
	I0731 19:10:16.774207  431884 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 19:10:16.774214  431884 command_runner.go:130] > # enable_nri = false
	I0731 19:10:16.774218  431884 command_runner.go:130] > # NRI socket to listen on.
	I0731 19:10:16.774224  431884 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 19:10:16.774229  431884 command_runner.go:130] > # NRI plugin directory to use.
	I0731 19:10:16.774237  431884 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 19:10:16.774242  431884 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 19:10:16.774248  431884 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 19:10:16.774257  431884 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 19:10:16.774267  431884 command_runner.go:130] > # nri_disable_connections = false
	I0731 19:10:16.774275  431884 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 19:10:16.774285  431884 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 19:10:16.774296  431884 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 19:10:16.774306  431884 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 19:10:16.774318  431884 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 19:10:16.774326  431884 command_runner.go:130] > [crio.stats]
	I0731 19:10:16.774332  431884 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 19:10:16.774338  431884 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 19:10:16.774343  431884 command_runner.go:130] > # stats_collection_period = 0
	I0731 19:10:16.774376  431884 command_runner.go:130] ! time="2024-07-31 19:10:16.735496664Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 19:10:16.774402  431884 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 19:10:16.774545  431884 cni.go:84] Creating CNI manager for ""
	I0731 19:10:16.774555  431884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 19:10:16.774566  431884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:10:16.774603  431884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.55 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-741077 NodeName:multinode-741077 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.55"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.55 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:10:16.774801  431884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.55
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-741077"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.55
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.55"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
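The YAML above is the kubeadm configuration minikube renders in memory before copying it to the node (the scp to /var/tmp/minikube/kubeadm.yaml.new a few lines below). As a minimal sketch only, with the rendered document stood in by a placeholder string, the local equivalent of that transfer is a plain file write:

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// kubeadmYAML is a placeholder for the rendered config shown above.
		kubeadmYAML := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n# ...\n"

		// The destination mirrors the target path in the scp line logged below;
		// writing to a .new file leaves any existing kubeadm.yaml untouched.
		if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", []byte(kubeadmYAML), 0o644); err != nil {
			log.Fatal(err)
		}
	}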
	
	I0731 19:10:16.774896  431884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:10:16.785691  431884 command_runner.go:130] > kubeadm
	I0731 19:10:16.785712  431884 command_runner.go:130] > kubectl
	I0731 19:10:16.785716  431884 command_runner.go:130] > kubelet
	I0731 19:10:16.785854  431884 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:10:16.785937  431884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:10:16.795573  431884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0731 19:10:16.813180  431884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:10:16.830701  431884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 19:10:16.848200  431884 ssh_runner.go:195] Run: grep 192.168.39.55	control-plane.minikube.internal$ /etc/hosts
	I0731 19:10:16.852603  431884 command_runner.go:130] > 192.168.39.55	control-plane.minikube.internal
	I0731 19:10:16.852698  431884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:10:17.015127  431884 ssh_runner.go:195] Run: sudo systemctl start kubelet
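A few lines above, minikube grepped /etc/hosts for the control-plane.minikube.internal mapping before reloading systemd and starting the kubelet. A hypothetical helper with the same intent (the function name and flow are illustrative, not minikube's code) would append the entry only when no matching line exists:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends "ip<TAB>host" to the hosts file only if no line
	// already maps host to ip; a stand-in for the grep check in the log above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.Contains(line, ip) && strings.HasSuffix(strings.TrimSpace(line), host) {
				return nil // already present, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.55", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}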
	I0731 19:10:17.030406  431884 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077 for IP: 192.168.39.55
	I0731 19:10:17.030437  431884 certs.go:194] generating shared ca certs ...
	I0731 19:10:17.030457  431884 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:10:17.030637  431884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:10:17.030698  431884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:10:17.030709  431884 certs.go:256] generating profile certs ...
	I0731 19:10:17.030838  431884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/client.key
	I0731 19:10:17.030914  431884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key.542dcf89
	I0731 19:10:17.030967  431884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key
	I0731 19:10:17.030983  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 19:10:17.031000  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 19:10:17.031014  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 19:10:17.031029  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 19:10:17.031046  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 19:10:17.031061  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 19:10:17.031079  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 19:10:17.031097  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 19:10:17.031174  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:10:17.031216  431884 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:10:17.031229  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:10:17.031258  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:10:17.031289  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:10:17.031320  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:10:17.031374  431884 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:10:17.031409  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.031427  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.031446  431884 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem -> /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.032257  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:10:17.058380  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:10:17.085074  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:10:17.111086  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:10:17.136603  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 19:10:17.162266  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:10:17.187655  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:10:17.212982  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/multinode-741077/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 19:10:17.237431  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:10:17.264574  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:10:17.291382  431884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:10:17.317058  431884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:10:17.334491  431884 ssh_runner.go:195] Run: openssl version
	I0731 19:10:17.340566  431884 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 19:10:17.340686  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:10:17.352433  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357604  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357634  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.357686  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:10:17.363551  431884 command_runner.go:130] > 3ec20f2e
	I0731 19:10:17.363730  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:10:17.373748  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:10:17.385174  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390267  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390342  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.390403  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:10:17.396104  431884 command_runner.go:130] > b5213941
	I0731 19:10:17.396190  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:10:17.405623  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:10:17.416481  431884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421330  431884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421464  431884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.421513  431884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:10:17.426997  431884 command_runner.go:130] > 51391683
	I0731 19:10:17.427169  431884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
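The three blocks above repeat one pattern per CA file: ask openssl for the certificate's subject hash, then make sure /etc/ssl/certs contains a <hash>.0 symlink back to the certificate. A rough Go equivalent of that shell sequence, shelling out to openssl rather than reimplementing its subject hashing (the paths and helper name here are illustrative):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert mirrors the "openssl x509 -hash" plus "ln -fs" steps in the log:
	// it asks openssl for the subject hash and creates /etc/ssl/certs/<hash>.0.
	func linkCACert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Remove any stale link first so os.Symlink behaves like "ln -fs".
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}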
	I0731 19:10:17.436295  431884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:10:17.440711  431884 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:10:17.440731  431884 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 19:10:17.440739  431884 command_runner.go:130] > Device: 253,1	Inode: 2103851     Links: 1
	I0731 19:10:17.440745  431884 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 19:10:17.440751  431884 command_runner.go:130] > Access: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440756  431884 command_runner.go:130] > Modify: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440760  431884 command_runner.go:130] > Change: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440767  431884 command_runner.go:130] >  Birth: 2024-07-31 19:03:17.986501917 +0000
	I0731 19:10:17.440816  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:10:17.446826  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.446929  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:10:17.452927  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.453173  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:10:17.459002  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.459064  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:10:17.464758  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.465051  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:10:17.470620  431884 command_runner.go:130] > Certificate will not expire
	I0731 19:10:17.470690  431884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 19:10:17.476253  431884 command_runner.go:130] > Certificate will not expire
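Each "openssl x509 -noout -checkend 86400" call above asks whether a certificate will still be valid 24 hours from now. The same check can be done without shelling out; this sketch uses only the Go standard library and borrows the apiserver-kubelet-client path from the log purely as an example:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// expires before now+d, the same question "openssl x509 -checkend" answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}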
	I0731 19:10:17.476324  431884 kubeadm.go:392] StartCluster: {Name:multinode-741077 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-741077 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.72 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.211 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:10:17.476486  431884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:10:17.476550  431884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:10:17.513842  431884 command_runner.go:130] > d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325
	I0731 19:10:17.513872  431884 command_runner.go:130] > 1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f
	I0731 19:10:17.513878  431884 command_runner.go:130] > 3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0
	I0731 19:10:17.513885  431884 command_runner.go:130] > f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568
	I0731 19:10:17.513890  431884 command_runner.go:130] > 303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7
	I0731 19:10:17.513896  431884 command_runner.go:130] > 26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c
	I0731 19:10:17.513901  431884 command_runner.go:130] > 79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64
	I0731 19:10:17.513907  431884 command_runner.go:130] > 9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6
	I0731 19:10:17.513933  431884 cri.go:89] found id: "d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325"
	I0731 19:10:17.513941  431884 cri.go:89] found id: "1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f"
	I0731 19:10:17.513947  431884 cri.go:89] found id: "3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0"
	I0731 19:10:17.513952  431884 cri.go:89] found id: "f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568"
	I0731 19:10:17.513959  431884 cri.go:89] found id: "303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7"
	I0731 19:10:17.513964  431884 cri.go:89] found id: "26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c"
	I0731 19:10:17.513968  431884 cri.go:89] found id: "79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64"
	I0731 19:10:17.513971  431884 cri.go:89] found id: "9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6"
	I0731 19:10:17.513976  431884 cri.go:89] found id: ""
	I0731 19:10:17.514032  431884 ssh_runner.go:195] Run: sudo runc list -f json
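StartCluster begins by collecting the IDs of any existing kube-system containers with crictl, which is the listing shown above (the real command runs under sudo over SSH). A minimal local sketch of that step, assuming crictl is on PATH and the caller has the needed privileges:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers runs the same crictl query as the log above and
	// returns the container IDs, one per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}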
	
	
	==> CRI-O <==
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.377619993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453269377596468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a9cfcbc-c086-42d1-9318-21b47b11336c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.378203826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efad41eb-3d60-4547-afac-babe85ed1427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.378263599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efad41eb-3d60-4547-afac-babe85ed1427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.380207288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efad41eb-3d60-4547-afac-babe85ed1427 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.425599691Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c077010d-c374-4c29-a26f-8fdc35c41271 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.425690845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c077010d-c374-4c29-a26f-8fdc35c41271 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.426979973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74a8483b-e4bf-4466-9454-4e0dc8ebcf41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.427519122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453269427494202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74a8483b-e4bf-4466-9454-4e0dc8ebcf41 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.428241760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ff9fe2b-862a-4826-9d56-697bd008f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.428305869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ff9fe2b-862a-4826-9d56-697bd008f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.428926322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ff9fe2b-862a-4826-9d56-697bd008f5f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.472267544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7205c9b5-9174-473b-bafa-c7bfd6fd86cf name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.472354872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7205c9b5-9174-473b-bafa-c7bfd6fd86cf name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.473795836Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e28243d7-e62e-4712-b186-3c3f63c247b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.474236332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453269474212808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e28243d7-e62e-4712-b186-3c3f63c247b9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.474874115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2dd54a4-c9f9-4a63-ae2f-b0c4c4b0c067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.474945620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2dd54a4-c9f9-4a63-ae2f-b0c4c4b0c067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.475448478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2dd54a4-c9f9-4a63-ae2f-b0c4c4b0c067 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.519039949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3202cd6d-9ae2-4b65-9469-29e59d5851f1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.519134793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3202cd6d-9ae2-4b65-9469-29e59d5851f1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.520809115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca05e784-086e-40a0-b10c-655a49c01a8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.521228653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453269521206213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca05e784-086e-40a0-b10c-655a49c01a8d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.521795984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdb0fdcf-0a36-427d-9220-0bdefc608f1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.521878474Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdb0fdcf-0a36-427d-9220-0bdefc608f1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:14:29 multinode-741077 crio[2898]: time="2024-07-31 19:14:29.522428100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b97fc6747afd6f47132fae502ed33e7380c7e31c5bbdc664a85f8c19d9b62754,PodSandboxId:ee0c4e8b2c216fb683068669079376733ff78d2bc7ec795e988a0d397cc7855f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722453057200403032,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80,PodSandboxId:1e517f110063878e1381246372dfd55a9a403266a11cabc9b0a3c845aa1e2862,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722453023660222196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40,PodSandboxId:60a8a80d5e08028e5fbcd56cdc4d6202f5a249c6ef67d39bf427bd72cc955a10,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722453023656465920,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\"
:\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ee0df40942bb6dd0cfb872e8e33f2df501538e4e11bfb579963ca684afbb5d5,PodSandboxId:5c12dae1148ecf7a68d36f223a2ac8ab9c5eddfa0277c6e14d1393478209eeed,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453023511060522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Ann
otations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac,PodSandboxId:7b7e1081a8b00d0a62a094e4ac3c431cfa6ece9737ef41b59a65aa56e55d5901,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722453023421597430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kub
ernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38,PodSandboxId:a86a91c1c20a91281151baddd81d94508d5c03c85b877db1881d003a2e05e34b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722453019676538291,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f,PodSandboxId:16a58c4a2fc71ff27ca1b7707a2fe5154bc54dc2f0f427e0bb2b9973ac99210f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722453019645533417,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918
f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d,PodSandboxId:41e31c1dc8ab64621c83c3e8c304f8060e34829e82e908689cd795cb362b8eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722453019614166133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map[string]string{io.kubernetes.container.hash: 83a51867,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6,PodSandboxId:a902e23540ac93e2abf1db8365813a73b7b962fefed1cf0035c1e431ae2e0265,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722453019557628569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44e22ae723b81ce13648fc5e6f0e9bd6644b746c345ea85da321c9cc7fae364,PodSandboxId:ff29ec0020d86aa7c9f17ede17aa56de2836dab8c9883eb17358fceac8c2d45a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722452696766925238,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-99dqx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c4427c4a-ddce-46cd-9a6d-340840c8704f,},Annotations:map[string]string{io.kubernetes.container.hash: 1c722d6,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325,PodSandboxId:3f92efbc1c57fc5fc9d49a0d0b0827f7a8efd416db3b51909fc039b60748ab1d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722452637488253387,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wj8lb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5af0cad6-0a64-45f3-91bd-b98cc3b74609,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1ef5b0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"container
Port\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1816d14ba8056f8457ba5e52e4eacd8daf92b528dde97764be90ebf50e13638f,PodSandboxId:faf07930b3c5c9f253b9eb224b6041b17a5c9313159ce0a104321c3725be19e7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722452637434286749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: fa39cd40-fd74-4448-b66f-b88f8730194c,},Annotations:map[string]string{io.kubernetes.container.hash: e53c34d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0,PodSandboxId:4576cd98ade56296eb34dd483a027c64b8f7442db52571a56f5a9442dafd12e7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722452625822879221,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qbk6,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 51cf5405-60a0-4f19-a850-ae06b9da9835,},Annotations:map[string]string{io.kubernetes.container.hash: d9d5de5f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568,PodSandboxId:1e9ebb4a10d8779fbe6915267b4ace8b69b7e2b12cca736031808dd61be971e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722452621856352717,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mw9ls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 62387ff7-fdfc-42c3-b320-dd0e23eb2d96,},Annotations:map[string]string{io.kubernetes.container.hash: b356cdc4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7,PodSandboxId:6f173fa22537a2e29bdcd5bf1d360c1217d79168a454ce068acecec23d983229,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722452602315157224,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3230c1158ed993056ede747c90c6879
,},Annotations:map[string]string{io.kubernetes.container.hash: f10448d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c,PodSandboxId:fd0585a2250b4a18344570b648cfe33bb06f6bfbe975d5f84c2dd19a50dea639,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722452602311182215,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70952dfea3e98d884ab005e06ac0626b,},Annotations:
map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64,PodSandboxId:78afbb02bd316ff367072f13dae9e16c7afde97084c6efc1b91e38f56f59974b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722452602292086883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84bc2c769ebbb8def5cd86577a05eead,},
Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6,PodSandboxId:37559affd3c8cea7f9a05390603149d1353c255e4994155142307b71196d287f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722452602273999230,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-741077,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e710e9f74e1775e267a655690682250,},Annotations:map
[string]string{io.kubernetes.container.hash: 83a51867,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdb0fdcf-0a36-427d-9220-0bdefc608f1e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b97fc6747afd6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   ee0c4e8b2c216       busybox-fc5497c4f-99dqx
	45a83b75865f6       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   1e517f1100638       kindnet-4qbk6
	907d6524f1580       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   60a8a80d5e080       coredns-7db6d8ff4d-wj8lb
	2ee0df40942bb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   5c12dae1148ec       storage-provisioner
	6ef6dce2926c4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   7b7e1081a8b00       kube-proxy-mw9ls
	15e863e977792       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   a86a91c1c20a9       etcd-multinode-741077
	cb176e309b9f7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   16a58c4a2fc71       kube-controller-manager-multinode-741077
	0676a8d3c1f6b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   41e31c1dc8ab6       kube-apiserver-multinode-741077
	89261b71d79a8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   a902e23540ac9       kube-scheduler-multinode-741077
	e44e22ae723b8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   ff29ec0020d86       busybox-fc5497c4f-99dqx
	d55f850c96623       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   3f92efbc1c57f       coredns-7db6d8ff4d-wj8lb
	1816d14ba8056       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   faf07930b3c5c       storage-provisioner
	3c3762006378b       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   4576cd98ade56       kindnet-4qbk6
	f64d369909629       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   1e9ebb4a10d87       kube-proxy-mw9ls
	303524292a3a6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   6f173fa22537a       etcd-multinode-741077
	26b294b731f07       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   fd0585a2250b4       kube-scheduler-multinode-741077
	79cdedb3c18fb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   78afbb02bd316       kube-controller-manager-multinode-741077
	9c1b1bd427bf0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   37559affd3c8c       kube-apiserver-multinode-741077
	
	
	==> coredns [907d6524f1580722d2ac0fe58b6a89a89572b75b2ac83df21bdd8cdcda26ca40] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50804 - 30749 "HINFO IN 6117311924496023715.820196164584186632. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020821961s
	
	
	==> coredns [d55f850c966235f559444b2b5ffe7f96fc842228100ddf630766042d0f50d325] <==
	[INFO] 10.244.1.2:54822 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00181687s
	[INFO] 10.244.1.2:34109 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107656s
	[INFO] 10.244.1.2:50943 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00009601s
	[INFO] 10.244.1.2:51483 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001313488s
	[INFO] 10.244.1.2:54761 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006026s
	[INFO] 10.244.1.2:51072 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005785s
	[INFO] 10.244.1.2:45407 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074484s
	[INFO] 10.244.0.3:46460 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010851s
	[INFO] 10.244.0.3:57302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065688s
	[INFO] 10.244.0.3:45034 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076394s
	[INFO] 10.244.0.3:35377 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00005713s
	[INFO] 10.244.1.2:56065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129371s
	[INFO] 10.244.1.2:38412 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091324s
	[INFO] 10.244.1.2:59251 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000097258s
	[INFO] 10.244.1.2:46978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000154639s
	[INFO] 10.244.0.3:54413 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000079161s
	[INFO] 10.244.0.3:52198 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000221185s
	[INFO] 10.244.0.3:48871 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000077338s
	[INFO] 10.244.0.3:50362 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081957s
	[INFO] 10.244.1.2:42436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148372s
	[INFO] 10.244.1.2:34597 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011086s
	[INFO] 10.244.1.2:35776 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085166s
	[INFO] 10.244.1.2:49192 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000086725s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-741077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-741077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=multinode-741077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_03_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:03:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741077
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:10:22 +0000   Wed, 31 Jul 2024 19:03:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    multinode-741077
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7740aefdd974dac90305bd0c46ded41
	  System UUID:                c7740aef-dd97-4dac-9030-5bd0c46ded41
	  Boot ID:                    3ef53520-ca80-4f5e-bd45-a49390b976a5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-99dqx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7db6d8ff4d-wj8lb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-741077                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-4qbk6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-741077             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-741077    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-mw9ls                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-741077             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-741077 event: Registered Node multinode-741077 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-741077 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m11s)  kubelet          Node multinode-741077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m11s)  kubelet          Node multinode-741077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m11s)  kubelet          Node multinode-741077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m54s                  node-controller  Node multinode-741077 event: Registered Node multinode-741077 in Controller
	
	
	Name:               multinode-741077-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-741077-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=multinode-741077
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T19_11_03_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:11:03 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-741077-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:12:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:12:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:12:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:12:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 19:11:34 +0000   Wed, 31 Jul 2024 19:12:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    multinode-741077-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8b7b3347bb304251b0635448ba8b2c2e
	  System UUID:                8b7b3347-bb30-4251-b063-5448ba8b2c2e
	  Boot ID:                    aaad1be5-26c6-4e5e-9f47-abd28880ee40
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xstbr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-zjjn6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m59s
	  kube-system                 kube-proxy-k775h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m59s (x2 over 9m59s)  kubelet          Node multinode-741077-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m59s (x2 over 9m59s)  kubelet          Node multinode-741077-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m59s (x2 over 9m59s)  kubelet          Node multinode-741077-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-741077-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-741077-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-741077-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-741077-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-741077-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-741077-m02 status is now: NodeNotReady
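
	The m02 description above shows both unreachable taints (NoSchedule and NoExecute), all conditions Unknown once the kubelet stopped posting status at 19:12:45, and a NodeNotReady event 104s before this dump. A minimal sketch of how that state could be re-queried, assuming the kubeconfig context minikube created for this profile:

	    # List node readiness, then print just the taints on the unreachable worker.
	    kubectl --context multinode-741077 get nodes -o wide
	    kubectl --context multinode-741077 get node multinode-741077-m02 \
	      -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'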
	
	
	==> dmesg <==
	[  +0.065696] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.186152] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.123682] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.274137] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.289633] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.060117] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.135375] systemd-fstab-generator[947]: Ignoring "noauto" option for root device
	[  +1.943447] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.099838] systemd-fstab-generator[1283]: Ignoring "noauto" option for root device
	[  +0.076897] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.638860] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +0.112431] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.458865] kauditd_printk_skb: 56 callbacks suppressed
	[Jul31 19:04] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 19:10] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.158822] systemd-fstab-generator[2829]: Ignoring "noauto" option for root device
	[  +0.174139] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.145003] systemd-fstab-generator[2855]: Ignoring "noauto" option for root device
	[  +0.282503] systemd-fstab-generator[2883]: Ignoring "noauto" option for root device
	[  +0.737511] systemd-fstab-generator[2981]: Ignoring "noauto" option for root device
	[  +1.746352] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +4.681706] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.819370] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.948175] systemd-fstab-generator[3943]: Ignoring "noauto" option for root device
	[ +18.041026] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [15e863e977792f70ac5bc97bfec7d2f8d96597b7cf521e038f735881e248af38] <==
	{"level":"info","ts":"2024-07-31T19:10:20.060998Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:10:20.061008Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T19:10:20.061273Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 switched to configuration voters=(17042293819748820353)"}
	{"level":"info","ts":"2024-07-31T19:10:20.061343Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6efde86ab6af376b","local-member-id":"ec8263ef63f6a581","added-peer-id":"ec8263ef63f6a581","added-peer-peer-urls":["https://192.168.39.55:2380"]}
	{"level":"info","ts":"2024-07-31T19:10:20.061577Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6efde86ab6af376b","local-member-id":"ec8263ef63f6a581","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:10:20.061619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:10:20.070882Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:10:20.075624Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:10:20.075662Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:10:20.076482Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec8263ef63f6a581","initial-advertise-peer-urls":["https://192.168.39.55:2380"],"listen-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.55:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:10:20.076537Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:10:21.220421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.220551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.22061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 received MsgPreVoteResp from ec8263ef63f6a581 at term 2"}
	{"level":"info","ts":"2024-07-31T19:10:21.22064Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 received MsgVoteResp from ec8263ef63f6a581 at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec8263ef63f6a581 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.220728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec8263ef63f6a581 elected leader ec8263ef63f6a581 at term 3"}
	{"level":"info","ts":"2024-07-31T19:10:21.227172Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:10:21.227122Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ec8263ef63f6a581","local-member-attributes":"{Name:multinode-741077 ClientURLs:[https://192.168.39.55:2379]}","request-path":"/0/members/ec8263ef63f6a581/attributes","cluster-id":"6efde86ab6af376b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:10:21.228541Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:10:21.229626Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.55:2379"}
	{"level":"info","ts":"2024-07-31T19:10:21.230458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:10:21.230507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:10:21.230629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [303524292a3a6121a5e1d2cd371f99364144759601b400a537c2fa902c0b61e7] <==
	{"level":"info","ts":"2024-07-31T19:05:30.706837Z","caller":"traceutil/trace.go:171","msg":"trace[27633436] transaction","detail":"{read_only:false; response_revision:662; number_of_response:1; }","duration":"192.54722ms","start":"2024-07-31T19:05:30.514271Z","end":"2024-07-31T19:05:30.706818Z","steps":["trace[27633436] 'process raft request'  (duration: 192.480323ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:05:30.707149Z","caller":"traceutil/trace.go:171","msg":"trace[1399407284] transaction","detail":"{read_only:false; response_revision:661; number_of_response:1; }","duration":"279.550349ms","start":"2024-07-31T19:05:30.427584Z","end":"2024-07-31T19:05:30.707134Z","steps":["trace[1399407284] 'process raft request'  (duration: 235.631983ms)","trace[1399407284] 'compare'  (duration: 43.397317ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:05:30.707409Z","caller":"traceutil/trace.go:171","msg":"trace[1121990176] linearizableReadLoop","detail":"{readStateIndex:709; appliedIndex:708; }","duration":"267.794174ms","start":"2024-07-31T19:05:30.439553Z","end":"2024-07-31T19:05:30.707348Z","steps":["trace[1121990176] 'read index received'  (duration: 73.956225ms)","trace[1121990176] 'applied index is now lower than readState.Index'  (duration: 193.836977ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:05:30.707555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.98636ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-741077-m03\" ","response":"range_response_count:1 size:3023"}
	{"level":"info","ts":"2024-07-31T19:05:30.707618Z","caller":"traceutil/trace.go:171","msg":"trace[220000816] range","detail":"{range_begin:/registry/minions/multinode-741077-m03; range_end:; response_count:1; response_revision:662; }","duration":"268.078758ms","start":"2024-07-31T19:05:30.439529Z","end":"2024-07-31T19:05:30.707607Z","steps":["trace[220000816] 'agreement among raft nodes before linearized reading'  (duration: 267.982018ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.707566Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.21095ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-31T19:05:30.707808Z","caller":"traceutil/trace.go:171","msg":"trace[727598803] range","detail":"{range_begin:/registry/serviceaccounts/; range_end:/registry/serviceaccounts0; response_count:0; response_revision:662; }","duration":"187.470036ms","start":"2024-07-31T19:05:30.520328Z","end":"2024-07-31T19:05:30.707798Z","steps":["trace[727598803] 'agreement among raft nodes before linearized reading'  (duration: 187.194283ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:05:30.93274Z","caller":"traceutil/trace.go:171","msg":"trace[1510250675] linearizableReadLoop","detail":"{readStateIndex:711; appliedIndex:710; }","duration":"175.711481ms","start":"2024-07-31T19:05:30.757013Z","end":"2024-07-31T19:05:30.932725Z","steps":["trace[1510250675] 'read index received'  (duration: 170.76679ms)","trace[1510250675] 'applied index is now lower than readState.Index'  (duration: 4.944206ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T19:05:30.932827Z","caller":"traceutil/trace.go:171","msg":"trace[224791766] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"214.585064ms","start":"2024-07-31T19:05:30.718236Z","end":"2024-07-31T19:05:30.932822Z","steps":["trace[224791766] 'process raft request'  (duration: 209.586039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.337542ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2024-07-31T19:05:30.93421Z","caller":"traceutil/trace.go:171","msg":"trace[1794451727] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:663; }","duration":"146.559138ms","start":"2024-07-31T19:05:30.787639Z","end":"2024-07-31T19:05:30.934198Z","steps":["trace[1794451727] 'agreement among raft nodes before linearized reading'  (duration: 145.310999ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.097978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-31T19:05:30.934522Z","caller":"traceutil/trace.go:171","msg":"trace[1781365690] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:663; }","duration":"177.530369ms","start":"2024-07-31T19:05:30.756982Z","end":"2024-07-31T19:05:30.934513Z","steps":["trace[1781365690] 'agreement among raft nodes before linearized reading'  (duration: 176.108693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:05:30.933211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.966414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-31T19:05:30.934708Z","caller":"traceutil/trace.go:171","msg":"trace[1123417128] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:663; }","duration":"146.475337ms","start":"2024-07-31T19:05:30.788226Z","end":"2024-07-31T19:05:30.934701Z","steps":["trace[1123417128] 'agreement among raft nodes before linearized reading'  (duration: 144.906877ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:08:44.01447Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T19:08:44.014618Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-741077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"]}
	{"level":"warn","ts":"2024-07-31T19:08:44.014823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.014942Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.10401Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.55:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:08:44.104256Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.55:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:08:44.104491Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ec8263ef63f6a581","current-leader-member-id":"ec8263ef63f6a581"}
	{"level":"info","ts":"2024-07-31T19:08:44.107404Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:08:44.107587Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.55:2380"}
	{"level":"info","ts":"2024-07-31T19:08:44.107621Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-741077","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.55:2380"],"advertise-client-urls":["https://192.168.39.55:2379"]}
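
	This earlier etcd instance logs several "apply request took too long" warnings (well past its 100ms expected-duration) before shutting down cleanly on SIGTERM at 19:08:44; the replacement instance above then wins a single-member election at term 3. A hedged way to probe the running member is to reuse the metrics listener it reports (http://127.0.0.1:2381), which also serves /health; the commands below mirror the minikube ssh style used elsewhere in this report and are illustrative, not part of the test:

	    # Health of the local etcd member, then WAL fsync latency counters.
	    out/minikube-linux-amd64 -p multinode-741077 ssh "curl -s http://127.0.0.1:2381/health"
	    out/minikube-linux-amd64 -p multinode-741077 ssh "curl -s http://127.0.0.1:2381/metrics | grep etcd_disk_wal_fsync_duration"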
	
	
	==> kernel <==
	 19:14:30 up 11 min,  0 users,  load average: 0.25, 0.30, 0.18
	Linux multinode-741077 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3c3762006378b63983915f0c110bc8bcfbe8222411370b784c954b0a8984d1c0] <==
	I0731 19:07:56.880785       1 main.go:299] handling current node
	I0731 19:08:06.884228       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:06.884422       1 main.go:299] handling current node
	I0731 19:08:06.884478       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:06.884489       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:06.884764       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:06.884797       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:16.876231       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:16.876342       1 main.go:299] handling current node
	I0731 19:08:16.876440       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:16.876477       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:16.876653       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:16.876678       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:26.877068       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:26.877183       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:26.877338       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:26.877480       1 main.go:299] handling current node
	I0731 19:08:26.877519       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:26.877585       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:08:36.884397       1 main.go:295] Handling node with IPs: map[192.168.39.211:{}]
	I0731 19:08:36.884453       1 main.go:322] Node multinode-741077-m03 has CIDR [10.244.3.0/24] 
	I0731 19:08:36.884615       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:08:36.884642       1 main.go:299] handling current node
	I0731 19:08:36.884654       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:08:36.884658       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [45a83b75865f61ad5ab28e8dd684545403c21742fa0ac991f355d60ab681dc80] <==
	I0731 19:13:24.763154       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:13:34.763977       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:13:34.764089       1 main.go:299] handling current node
	I0731 19:13:34.764124       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:13:34.764143       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:13:44.763823       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:13:44.763874       1 main.go:299] handling current node
	I0731 19:13:44.763901       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:13:44.763907       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:13:54.763843       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:13:54.764039       1 main.go:299] handling current node
	I0731 19:13:54.764088       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:13:54.764107       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:14:04.765786       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:14:04.765929       1 main.go:299] handling current node
	I0731 19:14:04.765982       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:14:04.765992       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:14:14.764107       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:14:14.764209       1 main.go:299] handling current node
	I0731 19:14:14.764264       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:14:14.764286       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
	I0731 19:14:24.764301       1 main.go:295] Handling node with IPs: map[192.168.39.55:{}]
	I0731 19:14:24.764463       1 main.go:299] handling current node
	I0731 19:14:24.764526       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 19:14:24.764551       1 main.go:322] Node multinode-741077-m02 has CIDR [10.244.1.0/24] 
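
	Both kindnet containers above simply iterate the nodes they know about and their pod CIDRs: 10.244.0.0/24 and 10.244.1.0/24 for the two remaining nodes, plus 10.244.3.0/24 for the since-deleted m03 in the older log. A hedged cross-check of those assignments against the API server (context name assumed, as before):

	    # Print each node's name and its allocated pod CIDR range(s).
	    kubectl --context multinode-741077 get nodes \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'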
	
	
	==> kube-apiserver [0676a8d3c1f6b0efc8210e27818f754b129da9f205d061c75c152a8f425bfa8d] <==
	I0731 19:10:22.540944       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0731 19:10:22.599070       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:10:22.604999       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:10:22.625124       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:10:22.625736       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:10:22.625795       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:10:22.626478       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 19:10:22.626520       1 policy_source.go:224] refreshing policies
	I0731 19:10:22.639290       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:10:22.640963       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:10:22.641046       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 19:10:22.645987       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:10:22.647009       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:10:22.647106       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:10:22.647131       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:10:22.649733       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:10:22.686615       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:10:23.517652       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:10:24.906232       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:10:25.031529       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 19:10:25.046463       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 19:10:25.119666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:10:25.129874       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:10:34.986860       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:10:35.123886       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [9c1b1bd427bf0777effb4307730ba994044842fd2c3a527852660323060159e6] <==
	W0731 19:08:44.032930       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.032977       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033003       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033031       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033063       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033086       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033114       1 logging.go:59] [core] [Channel #5 SubChannel #7] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033139       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033166       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033202       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033232       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033261       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033275       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033289       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033319       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033321       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033353       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033486       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033526       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033557       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033607       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033637       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.034168       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.034625       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:08:44.033353       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [79cdedb3c18fb839f989b484c68ff9997e0163edab3a8b5802b77e10844c9e64] <==
	I0731 19:04:30.440108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m02" podCIDRs=["10.244.1.0/24"]
	I0731 19:04:35.372828       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-741077-m02"
	I0731 19:04:51.032176       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:04:53.511240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.882579ms"
	I0731 19:04:53.529167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.480211ms"
	I0731 19:04:53.529264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.707µs"
	I0731 19:04:53.539113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.976µs"
	I0731 19:04:53.561080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.133µs"
	I0731 19:04:56.972186       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.596211ms"
	I0731 19:04:56.972476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.237µs"
	I0731 19:04:57.447215       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.94724ms"
	I0731 19:04:57.447545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45µs"
	I0731 19:05:28.954832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:05:28.955005       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:05:28.998956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.2.0/24"]
	I0731 19:05:30.396921       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-741077-m03"
	I0731 19:05:49.092352       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:17.876323       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:18.993776       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:06:18.995993       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:06:19.012921       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.3.0/24"]
	I0731 19:06:38.145827       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:07:15.461493       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m03"
	I0731 19:07:15.504484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.934679ms"
	I0731 19:07:15.505955       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.911µs"
	
	
	==> kube-controller-manager [cb176e309b9f791ad666df926a4f0afa1edaa371b044f62b1c2836f17eec639f] <==
	I0731 19:11:03.583427       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m02" podCIDRs=["10.244.1.0/24"]
	I0731 19:11:05.461902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.055µs"
	I0731 19:11:05.471885       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.299µs"
	I0731 19:11:05.484246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="53.976µs"
	I0731 19:11:05.524195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.616µs"
	I0731 19:11:05.530794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.294µs"
	I0731 19:11:05.539044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="136.705µs"
	I0731 19:11:05.719713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.434µs"
	I0731 19:11:23.356818       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:23.377886       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.917µs"
	I0731 19:11:23.393258       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.843µs"
	I0731 19:11:26.880914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.753401ms"
	I0731 19:11:26.881166       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.146µs"
	I0731 19:11:41.891146       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:42.782534       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-741077-m03\" does not exist"
	I0731 19:11:42.783208       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:11:42.805088       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-741077-m03" podCIDRs=["10.244.2.0/24"]
	I0731 19:12:02.687214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m03"
	I0731 19:12:08.229261       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-741077-m02"
	I0731 19:12:45.167487       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.177683ms"
	I0731 19:12:45.168595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.542µs"
	I0731 19:12:55.019763       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ml2nd"
	I0731 19:12:55.046483       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ml2nd"
	I0731 19:12:55.046578       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nrftq"
	I0731 19:12:55.073292       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-nrftq"
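
	The restarted controller-manager re-allocates pod CIDRs, marks m02 NotReady, and finally has PodGC force-delete the kindnet and kube-proxy pods orphaned on the removed m03 node. To confirm nothing is still scheduled there, a hedged check (field selector and context name are assumptions, not from the test) could be:

	    # Should return no rows once the orphaned pods have been garbage-collected.
	    kubectl --context multinode-741077 get pods -A -o wide \
	      --field-selector spec.nodeName=multinode-741077-m03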
	
	
	==> kube-proxy [6ef6dce2926c4c1daa26de3b29728ee9b273380b99929f81592c0e8ffdab3aac] <==
	I0731 19:10:23.822904       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:10:23.844966       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0731 19:10:23.922542       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:10:23.922598       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:10:23.922618       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:10:23.931758       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:10:23.932122       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:10:23.932317       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:10:23.936744       1 config.go:192] "Starting service config controller"
	I0731 19:10:23.936783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:10:23.936811       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:10:23.936815       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:10:23.936852       1 config.go:319] "Starting node config controller"
	I0731 19:10:23.936873       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:10:24.037336       1 shared_informer.go:320] Caches are synced for node config
	I0731 19:10:24.037428       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:10:24.037475       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f64d36990962949cd7257220c489691e8633224207a14f64e5f3cfb4e51a7568] <==
	I0731 19:03:42.161920       1 server_linux.go:69] "Using iptables proxy"
	I0731 19:03:42.177302       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.55"]
	I0731 19:03:42.222755       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:03:42.222851       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:03:42.222883       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:03:42.226135       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:03:42.226486       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:03:42.226535       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:03:42.228930       1 config.go:192] "Starting service config controller"
	I0731 19:03:42.229193       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:03:42.229249       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:03:42.229267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:03:42.230128       1 config.go:319] "Starting node config controller"
	I0731 19:03:42.230680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:03:42.329670       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:03:42.329752       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:03:42.331254       1 shared_informer.go:320] Caches are synced for node config
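
	Both kube-proxy instances above select the iptables proxier, note the missing IPv6 iptables support, and set route_localnet=1 so NodePorts answer on localhost. A hedged sketch of inspecting the NAT rules kube-proxy programs on the node (KUBE-SERVICES is the standard entry chain in iptables mode; the command is illustrative, not from the test):

	    # First few service-dispatch rules installed by kube-proxy.
	    out/minikube-linux-amd64 -p multinode-741077 ssh "sudo iptables -t nat -L KUBE-SERVICES -n --line-numbers | head -n 15"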
	
	
	==> kube-scheduler [26b294b731f0767ef21674f867a63de6cdb67a0d0a70a47d3d10c618cc85e48c] <==
	E0731 19:03:24.768477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:03:24.768483       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:03:24.768489       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 19:03:24.768598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:03:24.768605       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 19:03:25.594309       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 19:03:25.594346       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 19:03:25.685356       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 19:03:25.685484       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 19:03:25.713940       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 19:03:25.714157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 19:03:25.741088       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 19:03:25.741203       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 19:03:25.914774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 19:03:25.914893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 19:03:25.948869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 19:03:25.948951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 19:03:25.951357       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 19:03:25.951481       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:03:25.962460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 19:03:25.962530       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 19:03:26.073434       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 19:03:26.073612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0731 19:03:28.465458       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:08:44.016200       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [89261b71d79a8889d5bc138a3dcc5905cfb28147765080241b52cd5804374ab6] <==
	I0731 19:10:20.698077       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:10:22.557152       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:10:22.557267       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:10:22.557278       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:10:22.557284       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:10:22.646622       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:10:22.646662       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:10:22.648219       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:10:22.648340       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:10:22.662468       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:10:22.652978       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:10:22.763271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957799    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62387ff7-fdfc-42c3-b320-dd0e23eb2d96-lib-modules\") pod \"kube-proxy-mw9ls\" (UID: \"62387ff7-fdfc-42c3-b320-dd0e23eb2d96\") " pod="kube-system/kube-proxy-mw9ls"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957822    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51cf5405-60a0-4f19-a850-ae06b9da9835-lib-modules\") pod \"kindnet-4qbk6\" (UID: \"51cf5405-60a0-4f19-a850-ae06b9da9835\") " pod="kube-system/kindnet-4qbk6"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957835    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fa39cd40-fd74-4448-b66f-b88f8730194c-tmp\") pod \"storage-provisioner\" (UID: \"fa39cd40-fd74-4448-b66f-b88f8730194c\") " pod="kube-system/storage-provisioner"
	Jul 31 19:10:22 multinode-741077 kubelet[3111]: I0731 19:10:22.957870    3111 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/51cf5405-60a0-4f19-a850-ae06b9da9835-cni-cfg\") pod \"kindnet-4qbk6\" (UID: \"51cf5405-60a0-4f19-a850-ae06b9da9835\") " pod="kube-system/kindnet-4qbk6"
	Jul 31 19:10:30 multinode-741077 kubelet[3111]: I0731 19:10:30.518422    3111 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 19:11:18 multinode-741077 kubelet[3111]: E0731 19:11:18.976357    3111 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:11:18 multinode-741077 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:12:18 multinode-741077 kubelet[3111]: E0731 19:12:18.983743    3111 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:12:18 multinode-741077 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:12:18 multinode-741077 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:12:18 multinode-741077 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:12:18 multinode-741077 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:13:18 multinode-741077 kubelet[3111]: E0731 19:13:18.977506    3111 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:13:18 multinode-741077 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:13:18 multinode-741077 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:13:18 multinode-741077 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:13:18 multinode-741077 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:14:18 multinode-741077 kubelet[3111]: E0731 19:14:18.977607    3111 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:14:18 multinode-741077 kubelet[3111]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:14:18 multinode-741077 kubelet[3111]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:14:18 multinode-741077 kubelet[3111]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:14:18 multinode-741077 kubelet[3111]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 19:14:29.082582  433842 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19356-395032/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-741077 -n multinode-741077
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-741077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.35s)
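The "failed to read file .../logs/lastStart.txt: bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB line limit (bufio.MaxScanTokenSize): single lines in lastStart.txt, such as the cluster-config dumps later in this report, can exceed it. A minimal sketch of one way to scan such a file with a larger buffer follows; the command-line path argument and the 10 MiB cap are illustrative assumptions, not minikube's own implementation.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Path to a log file with very long lines (e.g. a lastStart.txt), passed as the first argument.
		f, err := os.Open(os.Args[1])
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the scanner's limit above the 64 KiB default (bufio.MaxScanTokenSize).
		// The 10 MiB cap here is an arbitrary illustrative value.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Would still report "token too long" if a line exceeds the raised cap.
			log.Fatal(err)
		}
	}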

                                                
                                    
x
+
TestPreload (249.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-392764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 19:18:48.018269  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 19:20:15.793518  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 19:20:32.746280  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-392764 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m45.984625867s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392764 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-392764 image pull gcr.io/k8s-minikube/busybox: (2.712756478s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-392764
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-392764: (7.294385018s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-392764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-392764 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.664036793s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392764 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-31 19:22:29.519704139 +0000 UTC m=+4056.339217090
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-392764 -n test-preload-392764
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-392764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-392764 logs -n 25: (1.074520105s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077 sudo cat                                       | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt                       | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m02:/home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n                                                                 | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | multinode-741077-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-741077 ssh -n multinode-741077-m02 sudo cat                                   | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-741077 node stop m03                                                          | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:05 UTC | 31 Jul 24 19:06 UTC |
	| node    | multinode-741077 node start                                                             | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC | 31 Jul 24 19:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| stop    | -p multinode-741077                                                                     | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:06 UTC |                     |
	| start   | -p multinode-741077                                                                     | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:08 UTC | 31 Jul 24 19:12 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC |                     |
	| node    | multinode-741077 node delete                                                            | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC | 31 Jul 24 19:12 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-741077 stop                                                                   | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:12 UTC |                     |
	| start   | -p multinode-741077                                                                     | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:14 UTC | 31 Jul 24 19:17 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-741077                                                                | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:17 UTC |                     |
	| start   | -p multinode-741077-m02                                                                 | multinode-741077-m02 | jenkins | v1.33.1 | 31 Jul 24 19:17 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-741077-m03                                                                 | multinode-741077-m03 | jenkins | v1.33.1 | 31 Jul 24 19:17 UTC | 31 Jul 24 19:18 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-741077                                                                 | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:18 UTC |                     |
	| delete  | -p multinode-741077-m03                                                                 | multinode-741077-m03 | jenkins | v1.33.1 | 31 Jul 24 19:18 UTC | 31 Jul 24 19:18 UTC |
	| delete  | -p multinode-741077                                                                     | multinode-741077     | jenkins | v1.33.1 | 31 Jul 24 19:18 UTC | 31 Jul 24 19:18 UTC |
	| start   | -p test-preload-392764                                                                  | test-preload-392764  | jenkins | v1.33.1 | 31 Jul 24 19:18 UTC | 31 Jul 24 19:21 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-392764 image pull                                                          | test-preload-392764  | jenkins | v1.33.1 | 31 Jul 24 19:21 UTC | 31 Jul 24 19:21 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-392764                                                                  | test-preload-392764  | jenkins | v1.33.1 | 31 Jul 24 19:21 UTC | 31 Jul 24 19:21 UTC |
	| start   | -p test-preload-392764                                                                  | test-preload-392764  | jenkins | v1.33.1 | 31 Jul 24 19:21 UTC | 31 Jul 24 19:22 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-392764 image list                                                          | test-preload-392764  | jenkins | v1.33.1 | 31 Jul 24 19:22 UTC | 31 Jul 24 19:22 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:21:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:21:18.676792  436895 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:21:18.676946  436895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:21:18.676956  436895 out.go:304] Setting ErrFile to fd 2...
	I0731 19:21:18.676963  436895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:21:18.677149  436895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:21:18.677717  436895 out.go:298] Setting JSON to false
	I0731 19:21:18.678756  436895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11022,"bootTime":1722442657,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:21:18.678821  436895 start.go:139] virtualization: kvm guest
	I0731 19:21:18.681178  436895 out.go:177] * [test-preload-392764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:21:18.682902  436895 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:21:18.682948  436895 notify.go:220] Checking for updates...
	I0731 19:21:18.685656  436895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:21:18.686992  436895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:21:18.688300  436895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:21:18.689737  436895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:21:18.691299  436895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:21:18.693090  436895 config.go:182] Loaded profile config "test-preload-392764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 19:21:18.693543  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:21:18.693623  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:21:18.708906  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34329
	I0731 19:21:18.709344  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:21:18.709880  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:21:18.709902  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:21:18.710279  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:21:18.710460  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:18.712181  436895 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 19:21:18.713292  436895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:21:18.713590  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:21:18.713633  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:21:18.728533  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40265
	I0731 19:21:18.729018  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:21:18.729546  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:21:18.729569  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:21:18.729879  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:21:18.730105  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:18.765987  436895 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:21:18.767293  436895 start.go:297] selected driver: kvm2
	I0731 19:21:18.767307  436895 start.go:901] validating driver "kvm2" against &{Name:test-preload-392764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-392764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:21:18.767427  436895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:21:18.768140  436895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:21:18.768219  436895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:21:18.783752  436895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:21:18.784086  436895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:21:18.784115  436895 cni.go:84] Creating CNI manager for ""
	I0731 19:21:18.784123  436895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:21:18.784177  436895 start.go:340] cluster config:
	{Name:test-preload-392764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-392764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:21:18.784278  436895 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:21:18.785936  436895 out.go:177] * Starting "test-preload-392764" primary control-plane node in "test-preload-392764" cluster
	I0731 19:21:18.787103  436895 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 19:21:18.899048  436895 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 19:21:18.899091  436895 cache.go:56] Caching tarball of preloaded images
	I0731 19:21:18.899297  436895 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 19:21:18.901190  436895 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0731 19:21:18.902352  436895 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:21:19.017391  436895 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 19:21:31.386685  436895 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:21:31.386835  436895 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 19:21:32.249621  436895 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0731 19:21:32.249782  436895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/config.json ...
	I0731 19:21:32.250050  436895 start.go:360] acquireMachinesLock for test-preload-392764: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:21:32.250138  436895 start.go:364] duration metric: took 58.146µs to acquireMachinesLock for "test-preload-392764"
	I0731 19:21:32.250161  436895 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:21:32.250173  436895 fix.go:54] fixHost starting: 
	I0731 19:21:32.250514  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:21:32.250559  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:21:32.265369  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I0731 19:21:32.265785  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:21:32.266370  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:21:32.266403  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:21:32.266726  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:21:32.266933  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:32.267099  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetState
	I0731 19:21:32.268656  436895 fix.go:112] recreateIfNeeded on test-preload-392764: state=Stopped err=<nil>
	I0731 19:21:32.268683  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	W0731 19:21:32.268850  436895 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:21:32.271729  436895 out.go:177] * Restarting existing kvm2 VM for "test-preload-392764" ...
	I0731 19:21:32.272937  436895 main.go:141] libmachine: (test-preload-392764) Calling .Start
	I0731 19:21:32.273123  436895 main.go:141] libmachine: (test-preload-392764) Ensuring networks are active...
	I0731 19:21:32.273951  436895 main.go:141] libmachine: (test-preload-392764) Ensuring network default is active
	I0731 19:21:32.274205  436895 main.go:141] libmachine: (test-preload-392764) Ensuring network mk-test-preload-392764 is active
	I0731 19:21:32.274483  436895 main.go:141] libmachine: (test-preload-392764) Getting domain xml...
	I0731 19:21:32.275174  436895 main.go:141] libmachine: (test-preload-392764) Creating domain...
	I0731 19:21:33.474647  436895 main.go:141] libmachine: (test-preload-392764) Waiting to get IP...
	I0731 19:21:33.475442  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:33.475832  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:33.475914  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:33.475811  436978 retry.go:31] will retry after 304.923439ms: waiting for machine to come up
	I0731 19:21:33.782343  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:33.782725  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:33.782752  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:33.782692  436978 retry.go:31] will retry after 348.251427ms: waiting for machine to come up
	I0731 19:21:34.132288  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:34.132683  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:34.132708  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:34.132618  436978 retry.go:31] will retry after 395.755292ms: waiting for machine to come up
	I0731 19:21:34.530329  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:34.530764  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:34.530798  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:34.530732  436978 retry.go:31] will retry after 418.449921ms: waiting for machine to come up
	I0731 19:21:34.950280  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:34.950699  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:34.950724  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:34.950638  436978 retry.go:31] will retry after 473.532057ms: waiting for machine to come up
	I0731 19:21:35.425503  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:35.425921  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:35.425949  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:35.425865  436978 retry.go:31] will retry after 694.175754ms: waiting for machine to come up
	I0731 19:21:36.121743  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:36.122109  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:36.122141  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:36.122075  436978 retry.go:31] will retry after 1.136959139s: waiting for machine to come up
	I0731 19:21:37.260140  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:37.260652  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:37.260684  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:37.260591  436978 retry.go:31] will retry after 1.487082951s: waiting for machine to come up
	I0731 19:21:38.749445  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:38.749889  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:38.749915  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:38.749852  436978 retry.go:31] will retry after 1.493501492s: waiting for machine to come up
	I0731 19:21:40.245527  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:40.246071  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:40.246103  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:40.246024  436978 retry.go:31] will retry after 1.975510135s: waiting for machine to come up
	I0731 19:21:42.224193  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:42.224669  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:42.224690  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:42.224629  436978 retry.go:31] will retry after 2.647365634s: waiting for machine to come up
	I0731 19:21:44.874892  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:44.875211  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:44.875242  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:44.875159  436978 retry.go:31] will retry after 2.683396511s: waiting for machine to come up
	I0731 19:21:47.560470  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:47.560838  436895 main.go:141] libmachine: (test-preload-392764) DBG | unable to find current IP address of domain test-preload-392764 in network mk-test-preload-392764
	I0731 19:21:47.560874  436895 main.go:141] libmachine: (test-preload-392764) DBG | I0731 19:21:47.560792  436978 retry.go:31] will retry after 3.199565622s: waiting for machine to come up
	I0731 19:21:50.761635  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.762069  436895 main.go:141] libmachine: (test-preload-392764) Found IP for machine: 192.168.39.166
	I0731 19:21:50.762095  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has current primary IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.762114  436895 main.go:141] libmachine: (test-preload-392764) Reserving static IP address...
	I0731 19:21:50.762426  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "test-preload-392764", mac: "52:54:00:41:84:f9", ip: "192.168.39.166"} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:50.762444  436895 main.go:141] libmachine: (test-preload-392764) DBG | skip adding static IP to network mk-test-preload-392764 - found existing host DHCP lease matching {name: "test-preload-392764", mac: "52:54:00:41:84:f9", ip: "192.168.39.166"}
	I0731 19:21:50.762482  436895 main.go:141] libmachine: (test-preload-392764) Reserved static IP address: 192.168.39.166
	I0731 19:21:50.762508  436895 main.go:141] libmachine: (test-preload-392764) Waiting for SSH to be available...
	I0731 19:21:50.762517  436895 main.go:141] libmachine: (test-preload-392764) DBG | Getting to WaitForSSH function...
	I0731 19:21:50.764728  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.765112  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:50.765131  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.765238  436895 main.go:141] libmachine: (test-preload-392764) DBG | Using SSH client type: external
	I0731 19:21:50.765252  436895 main.go:141] libmachine: (test-preload-392764) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa (-rw-------)
	I0731 19:21:50.765271  436895 main.go:141] libmachine: (test-preload-392764) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.166 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:21:50.765278  436895 main.go:141] libmachine: (test-preload-392764) DBG | About to run SSH command:
	I0731 19:21:50.765287  436895 main.go:141] libmachine: (test-preload-392764) DBG | exit 0
	I0731 19:21:50.888803  436895 main.go:141] libmachine: (test-preload-392764) DBG | SSH cmd err, output: <nil>: 
	I0731 19:21:50.889287  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetConfigRaw
	I0731 19:21:50.890046  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetIP
	I0731 19:21:50.892893  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.893269  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:50.893301  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.893540  436895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/config.json ...
	I0731 19:21:50.893789  436895 machine.go:94] provisionDockerMachine start ...
	I0731 19:21:50.893813  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:50.894050  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:50.896605  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.896925  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:50.896951  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:50.897041  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:50.897226  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:50.897382  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:50.897523  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:50.897692  436895 main.go:141] libmachine: Using SSH client type: native
	I0731 19:21:50.897885  436895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0731 19:21:50.897896  436895 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:21:50.996762  436895 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 19:21:50.996807  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetMachineName
	I0731 19:21:50.997094  436895 buildroot.go:166] provisioning hostname "test-preload-392764"
	I0731 19:21:50.997124  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetMachineName
	I0731 19:21:50.997426  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:50.999995  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.000350  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.000398  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.000545  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.000731  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.000882  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.000998  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.001154  436895 main.go:141] libmachine: Using SSH client type: native
	I0731 19:21:51.001375  436895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0731 19:21:51.001389  436895 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-392764 && echo "test-preload-392764" | sudo tee /etc/hostname
	I0731 19:21:51.115371  436895 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-392764
	
	I0731 19:21:51.115405  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.117989  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.118318  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.118361  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.118536  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.118726  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.118961  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.119155  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.119361  436895 main.go:141] libmachine: Using SSH client type: native
	I0731 19:21:51.119530  436895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0731 19:21:51.119547  436895 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-392764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-392764/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-392764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:21:51.229576  436895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:21:51.229608  436895 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:21:51.229637  436895 buildroot.go:174] setting up certificates
	I0731 19:21:51.229648  436895 provision.go:84] configureAuth start
	I0731 19:21:51.229665  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetMachineName
	I0731 19:21:51.230010  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetIP
	I0731 19:21:51.232613  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.232962  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.233004  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.233126  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.235126  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.235486  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.235516  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.235662  436895 provision.go:143] copyHostCerts
	I0731 19:21:51.235724  436895 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:21:51.235738  436895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:21:51.235813  436895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:21:51.235959  436895 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:21:51.235972  436895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:21:51.236009  436895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:21:51.236092  436895 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:21:51.236102  436895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:21:51.236134  436895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:21:51.236200  436895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.test-preload-392764 san=[127.0.0.1 192.168.39.166 localhost minikube test-preload-392764]
	I0731 19:21:51.313199  436895 provision.go:177] copyRemoteCerts
	I0731 19:21:51.313280  436895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:21:51.313311  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.316018  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.316307  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.316338  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.316512  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.316699  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.316876  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.317028  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:21:51.399145  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:21:51.423443  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 19:21:51.446782  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:21:51.471047  436895 provision.go:87] duration metric: took 241.381908ms to configureAuth
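
The configureAuth phase above copies the host certificates, generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.166, localhost, minikube and test-preload-392764, and pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A minimal Go sketch of issuing a certificate with that SAN set follows (illustrative only, not minikube's provision code; it self-signs for brevity, whereas minikube signs the server cert with the CA key from ca-key.pem):

    // sketch: generate a server certificate carrying the SANs listed in the log above
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-392764"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries taken from the provision.go:117 line above.
            DNSNames:    []string{"localhost", "minikube", "test-preload-392764"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.166")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
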
	I0731 19:21:51.471089  436895 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:21:51.471301  436895 config.go:182] Loaded profile config "test-preload-392764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 19:21:51.471398  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.474342  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.474732  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.474756  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.474949  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.475156  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.475387  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.475544  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.475727  436895 main.go:141] libmachine: Using SSH client type: native
	I0731 19:21:51.475951  436895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0731 19:21:51.475971  436895 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:21:51.742241  436895 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:21:51.742267  436895 machine.go:97] duration metric: took 848.461374ms to provisionDockerMachine
	I0731 19:21:51.742292  436895 start.go:293] postStartSetup for "test-preload-392764" (driver="kvm2")
	I0731 19:21:51.742304  436895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:21:51.742321  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:51.742654  436895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:21:51.742694  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.745441  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.745751  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.745778  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.745888  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.746084  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.746296  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.746421  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:21:51.827661  436895 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:21:51.832120  436895 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:21:51.832169  436895 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:21:51.832240  436895 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:21:51.832316  436895 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:21:51.832418  436895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:21:51.842244  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:21:51.865822  436895 start.go:296] duration metric: took 123.514347ms for postStartSetup
	I0731 19:21:51.865887  436895 fix.go:56] duration metric: took 19.615715245s for fixHost
	I0731 19:21:51.865910  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.868767  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.869301  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.869330  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.869497  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.869708  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.869877  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.869983  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.870105  436895 main.go:141] libmachine: Using SSH client type: native
	I0731 19:21:51.870316  436895 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.166 22 <nil> <nil>}
	I0731 19:21:51.870338  436895 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 19:21:51.969023  436895 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722453711.945202186
	
	I0731 19:21:51.969045  436895 fix.go:216] guest clock: 1722453711.945202186
	I0731 19:21:51.969056  436895 fix.go:229] Guest: 2024-07-31 19:21:51.945202186 +0000 UTC Remote: 2024-07-31 19:21:51.865891869 +0000 UTC m=+33.224807889 (delta=79.310317ms)
	I0731 19:21:51.969076  436895 fix.go:200] guest clock delta is within tolerance: 79.310317ms
	I0731 19:21:51.969081  436895 start.go:83] releasing machines lock for "test-preload-392764", held for 19.7189303s
	I0731 19:21:51.969101  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:51.969387  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetIP
	I0731 19:21:51.971751  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.972080  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.972148  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.972261  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:51.972756  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:51.972920  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:21:51.973047  436895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:21:51.973090  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.973168  436895 ssh_runner.go:195] Run: cat /version.json
	I0731 19:21:51.973192  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:21:51.975810  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.975858  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.976143  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.976170  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.976206  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:51.976369  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:51.976402  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.976531  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:21:51.976596  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.976691  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:21:51.976823  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.976877  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:21:51.977011  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:21:51.977000  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:21:52.069453  436895 ssh_runner.go:195] Run: systemctl --version
	I0731 19:21:52.075422  436895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:21:52.221009  436895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:21:52.227380  436895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:21:52.227454  436895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:21:52.244393  436895 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:21:52.244423  436895 start.go:495] detecting cgroup driver to use...
	I0731 19:21:52.244511  436895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:21:52.261013  436895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:21:52.274806  436895 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:21:52.274872  436895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:21:52.289215  436895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:21:52.302697  436895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:21:52.420702  436895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:21:52.563780  436895 docker.go:233] disabling docker service ...
	I0731 19:21:52.563856  436895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:21:52.579327  436895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:21:52.593297  436895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:21:52.736314  436895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:21:52.854126  436895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:21:52.869387  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:21:52.889035  436895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 19:21:52.889114  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.900139  436895 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:21:52.900226  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.912273  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.923357  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.934279  436895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:21:52.945391  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.955757  436895 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:21:52.972953  436895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
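
After the run of sed edits above (19:21:52.889 through 19:21:52.972), /etc/crio/crio.conf.d/02-crio.conf should end up with roughly the following settings; this is reconstructed from the commands shown, not read back from the VM:

    pause_image = "registry.k8s.io/pause:3.7"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
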
	I0731 19:21:52.983569  436895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:21:52.993350  436895 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:21:52.993430  436895 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:21:53.008801  436895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:21:53.020857  436895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:21:53.157269  436895 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:21:53.294826  436895 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:21:53.294922  436895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:21:53.299403  436895 start.go:563] Will wait 60s for crictl version
	I0731 19:21:53.299459  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:21:53.303584  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:21:53.344158  436895 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
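
The two "Will wait 60s" lines at 19:21:53.294 and 19:21:53.299 amount to simple poll loops: first for the CRI socket to reappear after the crio restart, then for crictl to answer. An illustrative loop (not minikube's actual implementation) is:

    // sketch: poll for the CRI-O socket and a working crictl, as the log above waits for
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor retries check once per second until it succeeds or timeout elapses.
    func waitFor(timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        if err := waitFor(60*time.Second, func() error {
            _, err := os.Stat("/var/run/crio/crio.sock")
            return err
        }); err != nil {
            panic(err)
        }
        if err := waitFor(60*time.Second, func() error {
            return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
        }); err != nil {
            panic(err)
        }
    }
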
	I0731 19:21:53.344242  436895 ssh_runner.go:195] Run: crio --version
	I0731 19:21:53.373201  436895 ssh_runner.go:195] Run: crio --version
	I0731 19:21:53.405247  436895 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0731 19:21:53.406815  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetIP
	I0731 19:21:53.409520  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:53.410152  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:21:53.410183  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:21:53.410490  436895 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:21:53.415113  436895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:21:53.428132  436895 kubeadm.go:883] updating cluster {Name:test-preload-392764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-392764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:21:53.428258  436895 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 19:21:53.428331  436895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:21:53.469253  436895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 19:21:53.469326  436895 ssh_runner.go:195] Run: which lz4
	I0731 19:21:53.473296  436895 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 19:21:53.477457  436895 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:21:53.477483  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0731 19:21:54.995947  436895 crio.go:462] duration metric: took 1.522692598s to copy over tarball
	I0731 19:21:54.996034  436895 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:21:57.370260  436895 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.374192173s)
	I0731 19:21:57.370286  436895 crio.go:469] duration metric: took 2.374310761s to extract the tarball
	I0731 19:21:57.370294  436895 ssh_runner.go:146] rm: /preloaded.tar.lz4
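
The preload path above is: stat /preloaded.tar.lz4 on the guest, scp the ~459 MB tarball when it is missing, untar it into /var while preserving xattrs, then delete it. A minimal sketch of the extraction step, shelling out to tar the same way the log does, could look like:

    // sketch: extract the preloaded image tarball with lz4, preserving security xattrs
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
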
	I0731 19:21:57.412054  436895 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:21:57.453707  436895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 19:21:57.453735  436895 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 19:21:57.453794  436895 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:21:57.453840  436895 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 19:21:57.453866  436895 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 19:21:57.453919  436895 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 19:21:57.453956  436895 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 19:21:57.453998  436895 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 19:21:57.453950  436895 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0731 19:21:57.453926  436895 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 19:21:57.455375  436895 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 19:21:57.455395  436895 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:21:57.455406  436895 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 19:21:57.455408  436895 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 19:21:57.455375  436895 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 19:21:57.455377  436895 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 19:21:57.455375  436895 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 19:21:57.455383  436895 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 19:21:57.598793  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 19:21:57.637747  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0731 19:21:57.643833  436895 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0731 19:21:57.643887  436895 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 19:21:57.643944  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:21:57.666548  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0731 19:21:57.678211  436895 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0731 19:21:57.678252  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 19:21:57.678261  436895 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 19:21:57.678302  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:21:57.684310  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 19:21:57.717604  436895 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0731 19:21:57.717661  436895 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 19:21:57.717686  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 19:21:57.717705  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:21:57.731427  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 19:21:57.731551  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 19:21:57.755838  436895 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 19:21:57.755899  436895 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 19:21:57.755943  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:21:57.755952  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 19:21:57.774336  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0731 19:21:57.774360  436895 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 19:21:57.774392  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 19:21:57.774404  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 19:21:57.774469  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 19:21:57.789459  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0731 19:21:57.802819  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 19:21:57.812182  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 19:21:57.841610  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 19:21:57.841711  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 19:21:57.841747  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 19:21:58.406392  436895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:22:01.050862  436895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (3.276433477s)
	I0731 19:22:01.050905  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0731 19:22:01.050976  436895 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (3.27648271s)
	I0731 19:22:01.051012  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0731 19:22:01.051025  436895 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 19:22:01.051065  436895 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4: (3.261560872s)
	I0731 19:22:01.051104  436895 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0731 19:22:01.051131  436895 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 19:22:01.051166  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:22:01.051104  436895 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (3.248253154s)
	I0731 19:22:01.051204  436895 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.20943366s)
	I0731 19:22:01.051075  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 19:22:01.051224  436895 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 19:22:01.051245  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 19:22:01.051253  436895 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 19:22:01.051163  436895 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7: (3.238953907s)
	I0731 19:22:01.051261  436895 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.209534767s)
	I0731 19:22:01.051280  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0731 19:22:01.051283  436895 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 19:22:01.051341  436895 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 19:22:01.051287  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:22:01.051351  436895 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.644919955s)
	I0731 19:22:01.051343  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 19:22:01.051376  436895 ssh_runner.go:195] Run: which crictl
	I0731 19:22:01.062415  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 19:22:01.915297  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0731 19:22:01.915349  436895 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 19:22:01.915427  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 19:22:01.915475  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 19:22:01.915427  436895 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 19:22:01.915428  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0731 19:22:01.915496  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 19:22:01.915576  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 19:22:01.920481  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0731 19:22:02.690199  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 19:22:02.690235  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0731 19:22:02.690261  436895 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 19:22:02.690307  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0731 19:22:02.690330  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 19:22:02.690366  436895 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 19:22:02.690469  436895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 19:22:04.753444  436895 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.063083233s)
	I0731 19:22:04.753492  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0731 19:22:04.753546  436895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.063207346s)
	I0731 19:22:04.753574  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 19:22:04.753580  436895 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (2.063091962s)
	I0731 19:22:04.753593  436895 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0731 19:22:04.753604  436895 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 19:22:04.753655  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 19:22:05.199946  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 19:22:05.199982  436895 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 19:22:05.200041  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 19:22:05.644901  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0731 19:22:05.644931  436895 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 19:22:05.644995  436895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 19:22:05.786533  436895 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 19:22:05.786592  436895 cache_images.go:123] Successfully loaded all cached images
	I0731 19:22:05.786602  436895 cache_images.go:92] duration metric: took 8.332854203s to LoadCachedImages
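
Each of the eight images follows the same per-image pattern visible above: podman image inspect to check whether the runtime already holds the expected hash, crictl rmi to drop a stale tag, a stat (and scp if needed) of the cached tarball under /var/lib/minikube/images, and finally podman load. A rough sketch of that flow, using a hypothetical ensureImage helper rather than minikube's own code:

    // sketch: load one cached image into CRI-O via podman if the runtime lacks it
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureImage(ref, tarball string) error {
        if exec.Command("sudo", "podman", "image", "inspect", ref).Run() == nil {
            return nil // already present (hash comparison omitted in this sketch)
        }
        _ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run() // ignore "not found"
        out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
        }
        return nil
    }

    func main() {
        if err := ensureImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
            panic(err)
        }
    }
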
	I0731 19:22:05.786618  436895 kubeadm.go:934] updating node { 192.168.39.166 8443 v1.24.4 crio true true} ...
	I0731 19:22:05.786742  436895 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-392764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.166
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-392764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:22:05.786813  436895 ssh_runner.go:195] Run: crio config
	I0731 19:22:05.844384  436895 cni.go:84] Creating CNI manager for ""
	I0731 19:22:05.844417  436895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:22:05.844432  436895 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:22:05.844451  436895 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.166 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-392764 NodeName:test-preload-392764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.166"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.166 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:22:05.844626  436895 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.166
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-392764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.166
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.166"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:22:05.844714  436895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0731 19:22:05.855772  436895 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:22:05.855856  436895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:22:05.867679  436895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0731 19:22:05.893688  436895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:22:05.910016  436895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0731 19:22:05.927539  436895 ssh_runner.go:195] Run: grep 192.168.39.166	control-plane.minikube.internal$ /etc/hosts
	I0731 19:22:05.931447  436895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.166	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:22:05.943752  436895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:22:06.072516  436895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:22:06.090586  436895 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764 for IP: 192.168.39.166
	I0731 19:22:06.090612  436895 certs.go:194] generating shared ca certs ...
	I0731 19:22:06.090632  436895 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:22:06.090872  436895 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:22:06.090956  436895 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:22:06.090973  436895 certs.go:256] generating profile certs ...
	I0731 19:22:06.091101  436895 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/client.key
	I0731 19:22:06.091191  436895 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/apiserver.key.368ba4ae
	I0731 19:22:06.091259  436895 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/proxy-client.key
	I0731 19:22:06.091424  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:22:06.091466  436895 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:22:06.091480  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:22:06.091516  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:22:06.091548  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:22:06.091579  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:22:06.091639  436895 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:22:06.092728  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:22:06.138625  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:22:06.170314  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:22:06.201230  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:22:06.229768  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 19:22:06.266171  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:22:06.299010  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:22:06.337493  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:22:06.362271  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:22:06.386533  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:22:06.409740  436895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:22:06.433372  436895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:22:06.450154  436895 ssh_runner.go:195] Run: openssl version
	I0731 19:22:06.456272  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:22:06.467815  436895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:22:06.472465  436895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:22:06.472521  436895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:22:06.478443  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:22:06.489317  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:22:06.500092  436895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:22:06.504440  436895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:22:06.504497  436895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:22:06.510214  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:22:06.521277  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:22:06.532017  436895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:22:06.536776  436895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:22:06.536826  436895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:22:06.542759  436895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
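
The ln/hash steps above are how minikube publishes its CA certificates inside the guest: each PEM is copied to /usr/share/ca-certificates and then symlinked into /etc/ssl/certs both under its readable name and under its OpenSSL subject hash (the 3ec20f2e.0, b5213941.0 and 51391683.0 names), which is the layout OpenSSL uses to locate trusted CAs. A minimal sketch of the same convention, run by hand against the minikubeCA.pem path from this log:

    # print the subject hash OpenSSL uses to look the CA up (b5213941 for this cert, per the log)
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # link the cert under its readable name and under its hash name
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0
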
	I0731 19:22:06.554651  436895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:22:06.559141  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:22:06.565042  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:22:06.570859  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:22:06.576983  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:22:06.582769  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:22:06.588456  436895 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
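
The six openssl runs above verify that each control-plane certificate will still be valid 86400 seconds (one day) from now; -checkend exits non-zero if the certificate expires inside that window. The same check can be reproduced from the host for any of those files, for example (the minikube ssh invocation and the echo strings are only illustrative):

    minikube -p test-preload-392764 ssh 'sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo still-valid || echo expires-within-a-day'
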
	I0731 19:22:06.594061  436895 kubeadm.go:392] StartCluster: {Name:test-preload-392764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-392764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:22:06.594165  436895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:22:06.594207  436895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:22:06.631894  436895 cri.go:89] found id: ""
	I0731 19:22:06.631965  436895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:22:06.642561  436895 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 19:22:06.642581  436895 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 19:22:06.642626  436895 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 19:22:06.652976  436895 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:22:06.653490  436895 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-392764" does not appear in /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:22:06.653632  436895 kubeconfig.go:62] /home/jenkins/minikube-integration/19356-395032/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-392764" cluster setting kubeconfig missing "test-preload-392764" context setting]
	I0731 19:22:06.654009  436895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:22:06.654677  436895 kapi.go:59] client config for test-preload-392764: &rest.Config{Host:"https://192.168.39.166:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 19:22:06.655330  436895 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 19:22:06.665593  436895 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.166
	I0731 19:22:06.665631  436895 kubeadm.go:1160] stopping kube-system containers ...
	I0731 19:22:06.665646  436895 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 19:22:06.665802  436895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:22:06.704268  436895 cri.go:89] found id: ""
	I0731 19:22:06.704361  436895 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 19:22:06.721467  436895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:22:06.731141  436895 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:22:06.731164  436895 kubeadm.go:157] found existing configuration files:
	
	I0731 19:22:06.731217  436895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:22:06.740523  436895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:22:06.740590  436895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:22:06.750837  436895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:22:06.760580  436895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:22:06.760657  436895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:22:06.770955  436895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:22:06.780289  436895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:22:06.780357  436895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:22:06.790498  436895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:22:06.799934  436895 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:22:06.799993  436895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:22:06.809684  436895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:22:06.819801  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:22:06.926313  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:22:07.557998  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:22:07.841820  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:22:07.905061  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
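
With existing configuration found on disk, the restart path (restartPrimaryControlPlane) rebuilds the control plane by rerunning individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml rather than performing a full kubeadm init. Stripped of the env PATH=/var/lib/minikube/binaries/v1.24.4 wrapper used above, the sequence is:

    sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    # and, once the apiserver reports healthy later in this log:
    sudo kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml
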
	I0731 19:22:07.981818  436895 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:22:07.981940  436895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:22:08.482206  436895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:22:08.982047  436895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:22:09.017504  436895 api_server.go:72] duration metric: took 1.035684804s to wait for apiserver process to appear ...
	I0731 19:22:09.017534  436895 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:22:09.017553  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:09.018003  436895 api_server.go:269] stopped: https://192.168.39.166:8443/healthz: Get "https://192.168.39.166:8443/healthz": dial tcp 192.168.39.166:8443: connect: connection refused
	I0731 19:22:09.517820  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:09.518563  436895 api_server.go:269] stopped: https://192.168.39.166:8443/healthz: Get "https://192.168.39.166:8443/healthz": dial tcp 192.168.39.166:8443: connect: connection refused
	I0731 19:22:10.018116  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:13.080306  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:22:13.080345  436895 api_server.go:103] status: https://192.168.39.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:22:13.080363  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:13.107381  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:22:13.107423  436895 api_server.go:103] status: https://192.168.39.166:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:22:13.517760  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:13.524003  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:22:13.524038  436895 api_server.go:103] status: https://192.168.39.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:22:14.017835  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:14.025574  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:22:14.025603  436895 api_server.go:103] status: https://192.168.39.166:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:22:14.518119  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:14.523620  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 200:
	ok
	I0731 19:22:14.529794  436895 api_server.go:141] control plane version: v1.24.4
	I0731 19:22:14.529836  436895 api_server.go:131] duration metric: took 5.512282108s to wait for apiserver health ...
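
The health wait above is a plain HTTPS GET against the apiserver's /healthz: it is refused while the process is still coming up, answers 403 while system:anonymous has no RBAC rules yet, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 with the body "ok". The same probe can be issued by hand; -k skips verification against the cluster CA, and the verbose form lists per-check results the way the 500 bodies above do:

    curl -k https://192.168.39.166:8443/healthz
    curl -k "https://192.168.39.166:8443/healthz?verbose"
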
	I0731 19:22:14.529846  436895 cni.go:84] Creating CNI manager for ""
	I0731 19:22:14.529852  436895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:22:14.531716  436895 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 19:22:14.533207  436895 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 19:22:14.545677  436895 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 19:22:14.565324  436895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:22:14.575152  436895 system_pods.go:59] 8 kube-system pods found
	I0731 19:22:14.575186  436895 system_pods.go:61] "coredns-6d4b75cb6d-5tcw6" [62aaf428-1af4-4c72-a16a-d5c3a468fb66] Running
	I0731 19:22:14.575190  436895 system_pods.go:61] "coredns-6d4b75cb6d-ch5zx" [270f9430-2539-4466-be48-99e94995e9c7] Running
	I0731 19:22:14.575200  436895 system_pods.go:61] "etcd-test-preload-392764" [8b22c4c5-cd3f-4734-a090-1b8fc93b7965] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 19:22:14.575205  436895 system_pods.go:61] "kube-apiserver-test-preload-392764" [207db934-9f89-4862-83ff-5328339663a2] Running
	I0731 19:22:14.575214  436895 system_pods.go:61] "kube-controller-manager-test-preload-392764" [4555ac46-f1b7-4325-b16f-85bd3c77f390] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 19:22:14.575219  436895 system_pods.go:61] "kube-proxy-dwr26" [340ecf7a-4c7e-4904-afe8-1ae586d2b5fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 19:22:14.575223  436895 system_pods.go:61] "kube-scheduler-test-preload-392764" [19ecdf90-358c-495b-86f9-de26ba21f0f4] Running
	I0731 19:22:14.575227  436895 system_pods.go:61] "storage-provisioner" [0f818e8c-8d97-4e00-b98a-0a795c9f1e7c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 19:22:14.575234  436895 system_pods.go:74] duration metric: took 9.880182ms to wait for pod list to return data ...
	I0731 19:22:14.575245  436895 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:22:14.578864  436895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:22:14.578892  436895 node_conditions.go:123] node cpu capacity is 2
	I0731 19:22:14.578903  436895 node_conditions.go:105] duration metric: took 3.652818ms to run NodePressure ...
	I0731 19:22:14.578920  436895 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:22:14.752533  436895 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 19:22:14.758742  436895 kubeadm.go:739] kubelet initialised
	I0731 19:22:14.758771  436895 kubeadm.go:740] duration metric: took 6.207152ms waiting for restarted kubelet to initialise ...
	I0731 19:22:14.758781  436895 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:22:14.763505  436895 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:14.769496  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.769522  436895 pod_ready.go:81] duration metric: took 5.990328ms for pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:14.769531  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.769537  436895 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-ch5zx" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:14.773800  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "coredns-6d4b75cb6d-ch5zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.773826  436895 pod_ready.go:81] duration metric: took 4.280426ms for pod "coredns-6d4b75cb6d-ch5zx" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:14.773835  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "coredns-6d4b75cb6d-ch5zx" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.773841  436895 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:14.784067  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "etcd-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.784090  436895 pod_ready.go:81] duration metric: took 10.23968ms for pod "etcd-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:14.784098  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "etcd-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.784105  436895 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:14.970320  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "kube-apiserver-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.970350  436895 pod_ready.go:81] duration metric: took 186.237327ms for pod "kube-apiserver-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:14.970360  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "kube-apiserver-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:14.970367  436895 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:15.369577  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:15.369624  436895 pod_ready.go:81] duration metric: took 399.242675ms for pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:15.369643  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:15.369653  436895 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-dwr26" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:15.771728  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "kube-proxy-dwr26" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:15.771763  436895 pod_ready.go:81] duration metric: took 402.097871ms for pod "kube-proxy-dwr26" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:15.771777  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "kube-proxy-dwr26" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:15.771786  436895 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:16.168815  436895 pod_ready.go:97] node "test-preload-392764" hosting pod "kube-scheduler-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:16.168845  436895 pod_ready.go:81] duration metric: took 397.050518ms for pod "kube-scheduler-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	E0731 19:22:16.168853  436895 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-392764" hosting pod "kube-scheduler-test-preload-392764" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:16.168860  436895 pod_ready.go:38] duration metric: took 1.410068629s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:22:16.168879  436895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:22:16.181455  436895 ops.go:34] apiserver oom_adj: -16
	I0731 19:22:16.181478  436895 kubeadm.go:597] duration metric: took 9.538891916s to restartPrimaryControlPlane
	I0731 19:22:16.181486  436895 kubeadm.go:394] duration metric: took 9.587435148s to StartCluster
	I0731 19:22:16.181505  436895 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:22:16.181577  436895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:22:16.182197  436895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:22:16.182414  436895 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.166 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:22:16.182488  436895 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 19:22:16.182588  436895 addons.go:69] Setting storage-provisioner=true in profile "test-preload-392764"
	I0731 19:22:16.182610  436895 addons.go:69] Setting default-storageclass=true in profile "test-preload-392764"
	I0731 19:22:16.182622  436895 addons.go:234] Setting addon storage-provisioner=true in "test-preload-392764"
	W0731 19:22:16.182633  436895 addons.go:243] addon storage-provisioner should already be in state true
	I0731 19:22:16.182646  436895 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-392764"
	I0731 19:22:16.182680  436895 host.go:66] Checking if "test-preload-392764" exists ...
	I0731 19:22:16.182677  436895 config.go:182] Loaded profile config "test-preload-392764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 19:22:16.182994  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:22:16.183004  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:22:16.183035  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:22:16.183134  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:22:16.185447  436895 out.go:177] * Verifying Kubernetes components...
	I0731 19:22:16.187107  436895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:22:16.198450  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38607
	I0731 19:22:16.198842  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32769
	I0731 19:22:16.198853  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:22:16.199242  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:22:16.199388  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:22:16.199409  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:22:16.199714  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:22:16.199730  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:22:16.199768  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:22:16.200016  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:22:16.200331  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetState
	I0731 19:22:16.200371  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:22:16.200428  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:22:16.202952  436895 kapi.go:59] client config for test-preload-392764: &rest.Config{Host:"https://192.168.39.166:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/client.crt", KeyFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/test-preload-392764/client.key", CAFile:"/home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02f40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 19:22:16.203281  436895 addons.go:234] Setting addon default-storageclass=true in "test-preload-392764"
	W0731 19:22:16.203304  436895 addons.go:243] addon default-storageclass should already be in state true
	I0731 19:22:16.203333  436895 host.go:66] Checking if "test-preload-392764" exists ...
	I0731 19:22:16.203714  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:22:16.203772  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:22:16.216459  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0731 19:22:16.217054  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:22:16.217555  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:22:16.217581  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:22:16.217974  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:22:16.218201  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetState
	I0731 19:22:16.219141  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38255
	I0731 19:22:16.219597  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:22:16.220179  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:22:16.220209  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:22:16.220222  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:22:16.220569  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:22:16.221184  436895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:22:16.221229  436895 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:22:16.222608  436895 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:22:16.224276  436895 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:22:16.224299  436895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 19:22:16.224319  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:22:16.227639  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:22:16.228115  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:22:16.228142  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:22:16.228345  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:22:16.228547  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:22:16.228716  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:22:16.228871  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:22:16.236799  436895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33019
	I0731 19:22:16.237237  436895 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:22:16.237753  436895 main.go:141] libmachine: Using API Version  1
	I0731 19:22:16.237784  436895 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:22:16.238137  436895 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:22:16.238363  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetState
	I0731 19:22:16.239950  436895 main.go:141] libmachine: (test-preload-392764) Calling .DriverName
	I0731 19:22:16.240183  436895 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 19:22:16.240203  436895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 19:22:16.240222  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHHostname
	I0731 19:22:16.243041  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:22:16.243528  436895 main.go:141] libmachine: (test-preload-392764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:84:f9", ip: ""} in network mk-test-preload-392764: {Iface:virbr1 ExpiryTime:2024-07-31 20:21:43 +0000 UTC Type:0 Mac:52:54:00:41:84:f9 Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:test-preload-392764 Clientid:01:52:54:00:41:84:f9}
	I0731 19:22:16.243553  436895 main.go:141] libmachine: (test-preload-392764) DBG | domain test-preload-392764 has defined IP address 192.168.39.166 and MAC address 52:54:00:41:84:f9 in network mk-test-preload-392764
	I0731 19:22:16.243677  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHPort
	I0731 19:22:16.243916  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHKeyPath
	I0731 19:22:16.244075  436895 main.go:141] libmachine: (test-preload-392764) Calling .GetSSHUsername
	I0731 19:22:16.244237  436895 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/test-preload-392764/id_rsa Username:docker}
	I0731 19:22:16.372181  436895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:22:16.391064  436895 node_ready.go:35] waiting up to 6m0s for node "test-preload-392764" to be "Ready" ...
	I0731 19:22:16.521474  436895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 19:22:16.527413  436895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 19:22:17.537486  436895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010036866s)
	I0731 19:22:17.537547  436895 main.go:141] libmachine: Making call to close driver server
	I0731 19:22:17.537560  436895 main.go:141] libmachine: (test-preload-392764) Calling .Close
	I0731 19:22:17.537573  436895 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.016061973s)
	I0731 19:22:17.537609  436895 main.go:141] libmachine: Making call to close driver server
	I0731 19:22:17.537623  436895 main.go:141] libmachine: (test-preload-392764) Calling .Close
	I0731 19:22:17.537844  436895 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:22:17.537907  436895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:22:17.537927  436895 main.go:141] libmachine: (test-preload-392764) DBG | Closing plugin on server side
	I0731 19:22:17.537935  436895 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:22:17.537936  436895 main.go:141] libmachine: (test-preload-392764) DBG | Closing plugin on server side
	I0731 19:22:17.537949  436895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:22:17.537955  436895 main.go:141] libmachine: Making call to close driver server
	I0731 19:22:17.537967  436895 main.go:141] libmachine: (test-preload-392764) Calling .Close
	I0731 19:22:17.537959  436895 main.go:141] libmachine: Making call to close driver server
	I0731 19:22:17.538013  436895 main.go:141] libmachine: (test-preload-392764) Calling .Close
	I0731 19:22:17.538197  436895 main.go:141] libmachine: (test-preload-392764) DBG | Closing plugin on server side
	I0731 19:22:17.538234  436895 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:22:17.538245  436895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:22:17.538524  436895 main.go:141] libmachine: (test-preload-392764) DBG | Closing plugin on server side
	I0731 19:22:17.538546  436895 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:22:17.538552  436895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:22:17.545038  436895 main.go:141] libmachine: Making call to close driver server
	I0731 19:22:17.545058  436895 main.go:141] libmachine: (test-preload-392764) Calling .Close
	I0731 19:22:17.545302  436895 main.go:141] libmachine: Successfully made call to close driver server
	I0731 19:22:17.545321  436895 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 19:22:17.545326  436895 main.go:141] libmachine: (test-preload-392764) DBG | Closing plugin on server side
	I0731 19:22:17.547464  436895 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 19:22:17.548857  436895 addons.go:510] duration metric: took 1.366382893s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 19:22:18.395202  436895 node_ready.go:53] node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:20.395552  436895 node_ready.go:53] node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:22.894395  436895 node_ready.go:53] node "test-preload-392764" has status "Ready":"False"
	I0731 19:22:23.394997  436895 node_ready.go:49] node "test-preload-392764" has status "Ready":"True"
	I0731 19:22:23.395024  436895 node_ready.go:38] duration metric: took 7.003928472s for node "test-preload-392764" to be "Ready" ...
	I0731 19:22:23.395036  436895 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:22:23.400158  436895 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:23.410425  436895 pod_ready.go:92] pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:23.410447  436895 pod_ready.go:81] duration metric: took 10.266441ms for pod "coredns-6d4b75cb6d-5tcw6" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:23.410455  436895 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:25.428235  436895 pod_ready.go:102] pod "etcd-test-preload-392764" in "kube-system" namespace has status "Ready":"False"
	I0731 19:22:27.919395  436895 pod_ready.go:102] pod "etcd-test-preload-392764" in "kube-system" namespace has status "Ready":"False"
	I0731 19:22:28.416479  436895 pod_ready.go:92] pod "etcd-test-preload-392764" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:28.416504  436895 pod_ready.go:81] duration metric: took 5.006043036s for pod "etcd-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.416515  436895 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.421016  436895 pod_ready.go:92] pod "kube-apiserver-test-preload-392764" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:28.421038  436895 pod_ready.go:81] duration metric: took 4.514792ms for pod "kube-apiserver-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.421047  436895 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.425180  436895 pod_ready.go:92] pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:28.425201  436895 pod_ready.go:81] duration metric: took 4.145781ms for pod "kube-controller-manager-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.425212  436895 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dwr26" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.430068  436895 pod_ready.go:92] pod "kube-proxy-dwr26" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:28.430093  436895 pod_ready.go:81] duration metric: took 4.874922ms for pod "kube-proxy-dwr26" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.430101  436895 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.435223  436895 pod_ready.go:92] pod "kube-scheduler-test-preload-392764" in "kube-system" namespace has status "Ready":"True"
	I0731 19:22:28.435243  436895 pod_ready.go:81] duration metric: took 5.135969ms for pod "kube-scheduler-test-preload-392764" in "kube-system" namespace to be "Ready" ...
	I0731 19:22:28.435251  436895 pod_ready.go:38] duration metric: took 5.040203801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:22:28.435264  436895 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:22:28.435310  436895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:22:28.450006  436895 api_server.go:72] duration metric: took 12.267564541s to wait for apiserver process to appear ...
	I0731 19:22:28.450033  436895 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:22:28.450051  436895 api_server.go:253] Checking apiserver healthz at https://192.168.39.166:8443/healthz ...
	I0731 19:22:28.454797  436895 api_server.go:279] https://192.168.39.166:8443/healthz returned 200:
	ok
	I0731 19:22:28.456136  436895 api_server.go:141] control plane version: v1.24.4
	I0731 19:22:28.456157  436895 api_server.go:131] duration metric: took 6.117247ms to wait for apiserver health ...
	I0731 19:22:28.456165  436895 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:22:28.617738  436895 system_pods.go:59] 7 kube-system pods found
	I0731 19:22:28.617779  436895 system_pods.go:61] "coredns-6d4b75cb6d-5tcw6" [62aaf428-1af4-4c72-a16a-d5c3a468fb66] Running
	I0731 19:22:28.617790  436895 system_pods.go:61] "etcd-test-preload-392764" [8b22c4c5-cd3f-4734-a090-1b8fc93b7965] Running
	I0731 19:22:28.617795  436895 system_pods.go:61] "kube-apiserver-test-preload-392764" [207db934-9f89-4862-83ff-5328339663a2] Running
	I0731 19:22:28.617807  436895 system_pods.go:61] "kube-controller-manager-test-preload-392764" [4555ac46-f1b7-4325-b16f-85bd3c77f390] Running
	I0731 19:22:28.617813  436895 system_pods.go:61] "kube-proxy-dwr26" [340ecf7a-4c7e-4904-afe8-1ae586d2b5fd] Running
	I0731 19:22:28.617818  436895 system_pods.go:61] "kube-scheduler-test-preload-392764" [19ecdf90-358c-495b-86f9-de26ba21f0f4] Running
	I0731 19:22:28.617822  436895 system_pods.go:61] "storage-provisioner" [0f818e8c-8d97-4e00-b98a-0a795c9f1e7c] Running
	I0731 19:22:28.617830  436895 system_pods.go:74] duration metric: took 161.658449ms to wait for pod list to return data ...
	I0731 19:22:28.617841  436895 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:22:28.814607  436895 default_sa.go:45] found service account: "default"
	I0731 19:22:28.814641  436895 default_sa.go:55] duration metric: took 196.791662ms for default service account to be created ...
	I0731 19:22:28.814650  436895 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:22:29.017787  436895 system_pods.go:86] 7 kube-system pods found
	I0731 19:22:29.017832  436895 system_pods.go:89] "coredns-6d4b75cb6d-5tcw6" [62aaf428-1af4-4c72-a16a-d5c3a468fb66] Running
	I0731 19:22:29.017840  436895 system_pods.go:89] "etcd-test-preload-392764" [8b22c4c5-cd3f-4734-a090-1b8fc93b7965] Running
	I0731 19:22:29.017847  436895 system_pods.go:89] "kube-apiserver-test-preload-392764" [207db934-9f89-4862-83ff-5328339663a2] Running
	I0731 19:22:29.017857  436895 system_pods.go:89] "kube-controller-manager-test-preload-392764" [4555ac46-f1b7-4325-b16f-85bd3c77f390] Running
	I0731 19:22:29.017873  436895 system_pods.go:89] "kube-proxy-dwr26" [340ecf7a-4c7e-4904-afe8-1ae586d2b5fd] Running
	I0731 19:22:29.017881  436895 system_pods.go:89] "kube-scheduler-test-preload-392764" [19ecdf90-358c-495b-86f9-de26ba21f0f4] Running
	I0731 19:22:29.017888  436895 system_pods.go:89] "storage-provisioner" [0f818e8c-8d97-4e00-b98a-0a795c9f1e7c] Running
	I0731 19:22:29.017899  436895 system_pods.go:126] duration metric: took 203.24301ms to wait for k8s-apps to be running ...
	I0731 19:22:29.017912  436895 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:22:29.017979  436895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:22:29.032874  436895 system_svc.go:56] duration metric: took 14.954311ms WaitForService to wait for kubelet
	I0731 19:22:29.032905  436895 kubeadm.go:582] duration metric: took 12.850465883s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:22:29.032931  436895 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:22:29.214451  436895 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:22:29.214483  436895 node_conditions.go:123] node cpu capacity is 2
	I0731 19:22:29.214496  436895 node_conditions.go:105] duration metric: took 181.559109ms to run NodePressure ...
	I0731 19:22:29.214511  436895 start.go:241] waiting for startup goroutines ...
	I0731 19:22:29.214518  436895 start.go:246] waiting for cluster config update ...
	I0731 19:22:29.214531  436895 start.go:255] writing updated cluster config ...
	I0731 19:22:29.214881  436895 ssh_runner.go:195] Run: rm -f paused
	I0731 19:22:29.263882  436895 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0731 19:22:29.265840  436895 out.go:177] 
	W0731 19:22:29.267197  436895 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0731 19:22:29.268563  436895 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0731 19:22:29.270022  436895 out.go:177] * Done! kubectl is now configured to use "test-preload-392764" cluster and "default" namespace by default
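The skew warning above comes from the host kubectl (1.30.3) being six minor versions ahead of the cluster (1.24.4); the log's own suggestion is to drive the cluster with the kubectl bundled by minikube instead of /usr/local/bin/kubectl. A minimal check, reusing the profile name from this run and assuming the profile still exists:

	out/minikube-linux-amd64 -p test-preload-392764 kubectl -- version
	out/minikube-linux-amd64 -p test-preload-392764 kubectl -- get pods -A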
	
	
	==> CRI-O <==
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.192287786Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cd9906826416c6d46016e24771cd5820f287329e4b0f1acfc44b3ec18bf99b89,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-5tcw6,Uid:62aaf428-1af4-4c72-a16a-d5c3a468fb66,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453742304154606,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tcw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62aaf428-1af4-4c72-a16a-d5c3a468fb66,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:22:14.000771661Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f6638eaa5180f90aaf7932bbea7f49376c06d52abb9119c1f3c4b87a181fbbe8,Metadata:&PodSandboxMetadata{Name:kube-proxy-dwr26,Uid:340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1722453735509913655,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dwr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T19:22:14.000767344Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7433ccc790b83b1df603d5ecdf10b5a35cefc0101f9d6cabcb0512215b15f8e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0f818e8c-8d97-4e00-b98a-0a795c9f1e7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453735221749700,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f818e8c-8d97-4e00-b98a-0a79
5c9f1e7c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T19:22:14.000770266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27f84f4f2baa474ae607e8edac2e8647642dafcd6804c380e7e83aa075162d9d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-392764,Ui
d:5ce4319e33b4586ad5cc013cf4a61360,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453728532515083,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce4319e33b4586ad5cc013cf4a61360,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ce4319e33b4586ad5cc013cf4a61360,kubernetes.io/config.seen: 2024-07-31T19:22:07.984017234Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ead168c782386e68518d4ea57c845895f15cd0dc9ade02d1b05d635fb257908,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-392764,Uid:86f5568298a59722b980dc74c40a4d43,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453728527895666,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-392764,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f5568298a59722b980dc74c40a4d43,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 86f5568298a59722b980dc74c40a4d43,kubernetes.io/config.seen: 2024-07-31T19:22:07.984018152Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d40aea923c80f4987c6d8a788ab531afeac7e5cb79e9ee153b5c66834862c997,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-392764,Uid:d6edac96eff9c941e4f41ada0ef15be0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453728520766648,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edac96eff9c941e4f41ada0ef15be0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.166:8443,kubernetes.io/config.hash: d6edac96eff9c941e4f41ada0ef15be0,kub
ernetes.io/config.seen: 2024-07-31T19:22:07.984015596Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1d0b069d241b646b559d4665af173580182672e8a32158aaad165f45199b831f,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-392764,Uid:d655049ffcdcd3f369b6e51e796b7e09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722453728516737925,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d655049ffcdcd3f369b6e51e796b7e09,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.166:2379,kubernetes.io/config.hash: d655049ffcdcd3f369b6e51e796b7e09,kubernetes.io/config.seen: 2024-07-31T19:22:07.983975420Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0b13b7de-9514-43ed-806b-d8186646d4f2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.192825693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15cc78ed-e9f1-4300-aacb-ced6049030c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.192879706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15cc78ed-e9f1-4300-aacb-ced6049030c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.193640700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5bb5e4622c84af360b278475854ab8d95d312bc1eb32f3aa57d6ab8624ffd90,PodSandboxId:cd9906826416c6d46016e24771cd5820f287329e4b0f1acfc44b3ec18bf99b89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722453742525970025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tcw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62aaf428-1af4-4c72-a16a-d5c3a468fb66,},Annotations:map[string]string{io.kubernetes.container.hash: 2a53116b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cec53156a3f3ad4b10a531a85725996035799a3d5e82d2f43a9639115ae585,PodSandboxId:f6638eaa5180f90aaf7932bbea7f49376c06d52abb9119c1f3c4b87a181fbbe8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722453735591866408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,},Annotations:map[string]string{io.kubernetes.container.hash: b762596f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f687501eee2da197fc2c4afd7050c2d13be7585c408d3cbfc70b5f29c5a60c46,PodSandboxId:d7433ccc790b83b1df603d5ecdf10b5a35cefc0101f9d6cabcb0512215b15f8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453735359514980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f
818e8c-8d97-4e00-b98a-0a795c9f1e7c,},Annotations:map[string]string{io.kubernetes.container.hash: e0a61e5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5001f0ce5616bcbd3c5d02e630440a38f25e361637e0f3a8074eb8d3808e2,PodSandboxId:1d0b069d241b646b559d4665af173580182672e8a32158aaad165f45199b831f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722453728795653247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d655049ffcdcd3f369b6e51e796b7e09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 7a7fb59d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54303de956ab8ca1e880b95842ee67f2c33946a6cf76b3ee77668b543faf659,PodSandboxId:d40aea923c80f4987c6d8a788ab531afeac7e5cb79e9ee153b5c66834862c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722453728766801266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edac96eff9c941e4f41ada0ef15be0,},Annotations:map
[string]string{io.kubernetes.container.hash: b960bba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2951c2fa684cbad405660852fc50e2c0d1fbde416ffab574bc9f512f01e3928,PodSandboxId:4ead168c782386e68518d4ea57c845895f15cd0dc9ade02d1b05d635fb257908,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722453728771438465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f5568298a59722b980dc74c40a4d43,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887429ea3993da981a5c5f2927acfda88a7eba1d7bdb447a61585e0e54c8158d,PodSandboxId:27f84f4f2baa474ae607e8edac2e8647642dafcd6804c380e7e83aa075162d9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722453728695757208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce4319e33b4586ad5cc013cf4a61360,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15cc78ed-e9f1-4300-aacb-ced6049030c4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.197986310Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21b924ca-d6a6-469e-a6e7-66606019ad81 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.198041957Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21b924ca-d6a6-469e-a6e7-66606019ad81 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.199256325Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d3fd5f2-1209-4d44-902b-dcae49dab878 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.199829154Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453750199808326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d3fd5f2-1209-4d44-902b-dcae49dab878 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.200396265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d17391d-8df5-40ad-a856-f19c91abe477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.200458561Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d17391d-8df5-40ad-a856-f19c91abe477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.200607729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5bb5e4622c84af360b278475854ab8d95d312bc1eb32f3aa57d6ab8624ffd90,PodSandboxId:cd9906826416c6d46016e24771cd5820f287329e4b0f1acfc44b3ec18bf99b89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722453742525970025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tcw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62aaf428-1af4-4c72-a16a-d5c3a468fb66,},Annotations:map[string]string{io.kubernetes.container.hash: 2a53116b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cec53156a3f3ad4b10a531a85725996035799a3d5e82d2f43a9639115ae585,PodSandboxId:f6638eaa5180f90aaf7932bbea7f49376c06d52abb9119c1f3c4b87a181fbbe8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722453735591866408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,},Annotations:map[string]string{io.kubernetes.container.hash: b762596f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f687501eee2da197fc2c4afd7050c2d13be7585c408d3cbfc70b5f29c5a60c46,PodSandboxId:d7433ccc790b83b1df603d5ecdf10b5a35cefc0101f9d6cabcb0512215b15f8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453735359514980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f
818e8c-8d97-4e00-b98a-0a795c9f1e7c,},Annotations:map[string]string{io.kubernetes.container.hash: e0a61e5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5001f0ce5616bcbd3c5d02e630440a38f25e361637e0f3a8074eb8d3808e2,PodSandboxId:1d0b069d241b646b559d4665af173580182672e8a32158aaad165f45199b831f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722453728795653247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d655049ffcdcd3f369b6e51e796b7e09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 7a7fb59d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54303de956ab8ca1e880b95842ee67f2c33946a6cf76b3ee77668b543faf659,PodSandboxId:d40aea923c80f4987c6d8a788ab531afeac7e5cb79e9ee153b5c66834862c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722453728766801266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edac96eff9c941e4f41ada0ef15be0,},Annotations:map
[string]string{io.kubernetes.container.hash: b960bba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2951c2fa684cbad405660852fc50e2c0d1fbde416ffab574bc9f512f01e3928,PodSandboxId:4ead168c782386e68518d4ea57c845895f15cd0dc9ade02d1b05d635fb257908,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722453728771438465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f5568298a59722b980dc74c40a4d43,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887429ea3993da981a5c5f2927acfda88a7eba1d7bdb447a61585e0e54c8158d,PodSandboxId:27f84f4f2baa474ae607e8edac2e8647642dafcd6804c380e7e83aa075162d9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722453728695757208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce4319e33b4586ad5cc013cf4a61360,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d17391d-8df5-40ad-a856-f19c91abe477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.237427835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8f09f84-ee03-4e90-9f0a-25c455cdeee1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.237516975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8f09f84-ee03-4e90-9f0a-25c455cdeee1 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.238508345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11cfb553-02d7-4071-b85d-7f3a59d9eca9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.238937424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453750238914094,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11cfb553-02d7-4071-b85d-7f3a59d9eca9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.239578726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b24208ba-6d8a-434e-b9d5-eb58cbc037f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.239632296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b24208ba-6d8a-434e-b9d5-eb58cbc037f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.239793694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5bb5e4622c84af360b278475854ab8d95d312bc1eb32f3aa57d6ab8624ffd90,PodSandboxId:cd9906826416c6d46016e24771cd5820f287329e4b0f1acfc44b3ec18bf99b89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722453742525970025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tcw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62aaf428-1af4-4c72-a16a-d5c3a468fb66,},Annotations:map[string]string{io.kubernetes.container.hash: 2a53116b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cec53156a3f3ad4b10a531a85725996035799a3d5e82d2f43a9639115ae585,PodSandboxId:f6638eaa5180f90aaf7932bbea7f49376c06d52abb9119c1f3c4b87a181fbbe8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722453735591866408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,},Annotations:map[string]string{io.kubernetes.container.hash: b762596f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f687501eee2da197fc2c4afd7050c2d13be7585c408d3cbfc70b5f29c5a60c46,PodSandboxId:d7433ccc790b83b1df603d5ecdf10b5a35cefc0101f9d6cabcb0512215b15f8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453735359514980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f
818e8c-8d97-4e00-b98a-0a795c9f1e7c,},Annotations:map[string]string{io.kubernetes.container.hash: e0a61e5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5001f0ce5616bcbd3c5d02e630440a38f25e361637e0f3a8074eb8d3808e2,PodSandboxId:1d0b069d241b646b559d4665af173580182672e8a32158aaad165f45199b831f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722453728795653247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d655049ffcdcd3f369b6e51e796b7e09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 7a7fb59d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54303de956ab8ca1e880b95842ee67f2c33946a6cf76b3ee77668b543faf659,PodSandboxId:d40aea923c80f4987c6d8a788ab531afeac7e5cb79e9ee153b5c66834862c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722453728766801266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edac96eff9c941e4f41ada0ef15be0,},Annotations:map
[string]string{io.kubernetes.container.hash: b960bba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2951c2fa684cbad405660852fc50e2c0d1fbde416ffab574bc9f512f01e3928,PodSandboxId:4ead168c782386e68518d4ea57c845895f15cd0dc9ade02d1b05d635fb257908,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722453728771438465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f5568298a59722b980dc74c40a4d43,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887429ea3993da981a5c5f2927acfda88a7eba1d7bdb447a61585e0e54c8158d,PodSandboxId:27f84f4f2baa474ae607e8edac2e8647642dafcd6804c380e7e83aa075162d9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722453728695757208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce4319e33b4586ad5cc013cf4a61360,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b24208ba-6d8a-434e-b9d5-eb58cbc037f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.278595110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=515ad8f4-783b-4aa0-bbae-a2b5ac729db4 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.278793977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=515ad8f4-783b-4aa0-bbae-a2b5ac729db4 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.280352534Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5adbefde-1b86-4229-b7de-988bb5cb69cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.280780300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722453750280758130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5adbefde-1b86-4229-b7de-988bb5cb69cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.281439435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=969dfe05-3c97-4460-8c78-1392aceb5d43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.281588951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=969dfe05-3c97-4460-8c78-1392aceb5d43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:22:30 test-preload-392764 crio[711]: time="2024-07-31 19:22:30.282006532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5bb5e4622c84af360b278475854ab8d95d312bc1eb32f3aa57d6ab8624ffd90,PodSandboxId:cd9906826416c6d46016e24771cd5820f287329e4b0f1acfc44b3ec18bf99b89,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722453742525970025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-5tcw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62aaf428-1af4-4c72-a16a-d5c3a468fb66,},Annotations:map[string]string{io.kubernetes.container.hash: 2a53116b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96cec53156a3f3ad4b10a531a85725996035799a3d5e82d2f43a9639115ae585,PodSandboxId:f6638eaa5180f90aaf7932bbea7f49376c06d52abb9119c1f3c4b87a181fbbe8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722453735591866408,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dwr26,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 340ecf7a-4c7e-4904-afe8-1ae586d2b5fd,},Annotations:map[string]string{io.kubernetes.container.hash: b762596f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f687501eee2da197fc2c4afd7050c2d13be7585c408d3cbfc70b5f29c5a60c46,PodSandboxId:d7433ccc790b83b1df603d5ecdf10b5a35cefc0101f9d6cabcb0512215b15f8e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722453735359514980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f
818e8c-8d97-4e00-b98a-0a795c9f1e7c,},Annotations:map[string]string{io.kubernetes.container.hash: e0a61e5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e5001f0ce5616bcbd3c5d02e630440a38f25e361637e0f3a8074eb8d3808e2,PodSandboxId:1d0b069d241b646b559d4665af173580182672e8a32158aaad165f45199b831f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722453728795653247,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d655049ffcdcd3f369b6e51e796b7e09,},Anno
tations:map[string]string{io.kubernetes.container.hash: 7a7fb59d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54303de956ab8ca1e880b95842ee67f2c33946a6cf76b3ee77668b543faf659,PodSandboxId:d40aea923c80f4987c6d8a788ab531afeac7e5cb79e9ee153b5c66834862c997,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722453728766801266,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6edac96eff9c941e4f41ada0ef15be0,},Annotations:map
[string]string{io.kubernetes.container.hash: b960bba7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2951c2fa684cbad405660852fc50e2c0d1fbde416ffab574bc9f512f01e3928,PodSandboxId:4ead168c782386e68518d4ea57c845895f15cd0dc9ade02d1b05d635fb257908,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722453728771438465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86f5568298a59722b980dc74c40a4d43,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:887429ea3993da981a5c5f2927acfda88a7eba1d7bdb447a61585e0e54c8158d,PodSandboxId:27f84f4f2baa474ae607e8edac2e8647642dafcd6804c380e7e83aa075162d9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722453728695757208,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-392764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ce4319e33b4586ad5cc013cf4a61360,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=969dfe05-3c97-4460-8c78-1392aceb5d43 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5bb5e4622c84       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   cd9906826416c       coredns-6d4b75cb6d-5tcw6
	96cec53156a3f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   f6638eaa5180f       kube-proxy-dwr26
	f687501eee2da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   d7433ccc790b8       storage-provisioner
	c0e5001f0ce56       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   1d0b069d241b6       etcd-test-preload-392764
	f2951c2fa684c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   4ead168c78238       kube-scheduler-test-preload-392764
	c54303de956ab       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   d40aea923c80f       kube-apiserver-test-preload-392764
	887429ea3993d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   27f84f4f2baa4       kube-controller-manager-test-preload-392764
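The table above is the CRI-O view of the node after the preload restart: every container is on attempt 1 under its original pod sandbox. Assuming the profile is still running, the same view can be reproduced by hand with crictl inside the node over minikube's ssh:

	out/minikube-linux-amd64 -p test-preload-392764 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p test-preload-392764 ssh -- sudo crictl pods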
	
	
	==> coredns [d5bb5e4622c84af360b278475854ab8d95d312bc1eb32f3aa57d6ab8624ffd90] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:55394 - 62076 "HINFO IN 40329574713245499.6648657537392986693. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.021174854s
	
	
	==> describe nodes <==
	Name:               test-preload-392764
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-392764
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=test-preload-392764
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_20_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:20:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-392764
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:22:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:22:23 +0000   Wed, 31 Jul 2024 19:20:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:22:23 +0000   Wed, 31 Jul 2024 19:20:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:22:23 +0000   Wed, 31 Jul 2024 19:20:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:22:23 +0000   Wed, 31 Jul 2024 19:22:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    test-preload-392764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2c2953656a834248a5ba63e08153ab1a
	  System UUID:                2c295365-6a83-4248-a5ba-63e08153ab1a
	  Boot ID:                    9fe65eda-8286-43b2-8025-7b762f6061b0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-5tcw6                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-test-preload-392764                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-392764             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-392764    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-dwr26                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-test-preload-392764             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x4 over 107s)  kubelet          Node test-preload-392764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x4 over 107s)  kubelet          Node test-preload-392764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x4 over 107s)  kubelet          Node test-preload-392764 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet          Node test-preload-392764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet          Node test-preload-392764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet          Node test-preload-392764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet          Node test-preload-392764 status is now: NodeReady
	  Normal  RegisteredNode           87s                  node-controller  Node test-preload-392764 event: Registered Node test-preload-392764 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-392764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-392764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-392764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node test-preload-392764 event: Registered Node test-preload-392764 in Controller
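The description above shows the node reporting Ready again at 19:22:23, with the kubelet and control-plane pods restarted roughly 22-23s before the log was captured. Assuming the kube context written by this run ("test-preload-392764") is still configured, the same information can be pulled directly:

	kubectl --context test-preload-392764 describe node test-preload-392764
	kubectl --context test-preload-392764 get node test-preload-392764 -o wide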
	
	
	==> dmesg <==
	[Jul31 19:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051345] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039720] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.798457] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.520061] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.575458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.950269] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.058037] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063537] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.168230] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.139217] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.301997] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[Jul31 19:22] systemd-fstab-generator[972]: Ignoring "noauto" option for root device
	[  +0.059113] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.696825] systemd-fstab-generator[1101]: Ignoring "noauto" option for root device
	[  +5.968817] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.543549] systemd-fstab-generator[1732]: Ignoring "noauto" option for root device
	[  +6.057402] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [c0e5001f0ce5616bcbd3c5d02e630440a38f25e361637e0f3a8074eb8d3808e2] <==
	{"level":"info","ts":"2024-07-31T19:22:09.439Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"21cab5ce19ce9e1c","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T19:22:09.440Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T19:22:09.443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c switched to configuration voters=(2434958445348036124)"}
	{"level":"info","ts":"2024-07-31T19:22:09.443Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","added-peer-id":"21cab5ce19ce9e1c","added-peer-peer-urls":["https://192.168.39.166:2380"]}
	{"level":"info","ts":"2024-07-31T19:22:09.444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:22:09.444Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:22:09.450Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T19:22:09.454Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"21cab5ce19ce9e1c","initial-advertise-peer-urls":["https://192.168.39.166:2380"],"listen-peer-urls":["https://192.168.39.166:2380"],"advertise-client-urls":["https://192.168.39.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T19:22:09.454Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:22:09.451Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-07-31T19:22:09.454Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgPreVoteResp from 21cab5ce19ce9e1c at term 2"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgVoteResp from 21cab5ce19ce9e1c at term 3"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:22:10.568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21cab5ce19ce9e1c elected leader 21cab5ce19ce9e1c at term 3"}
	{"level":"info","ts":"2024-07-31T19:22:10.573Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"21cab5ce19ce9e1c","local-member-attributes":"{Name:test-preload-392764 ClientURLs:[https://192.168.39.166:2379]}","request-path":"/0/members/21cab5ce19ce9e1c/attributes","cluster-id":"fb6a39d7926aa536","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:22:10.573Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:22:10.574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:22:10.574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:22:10.574Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:22:10.575Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:22:10.575Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.166:2379"}
	
	
	==> kernel <==
	 19:22:30 up 0 min,  0 users,  load average: 0.92, 0.25, 0.08
	Linux test-preload-392764 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c54303de956ab8ca1e880b95842ee67f2c33946a6cf76b3ee77668b543faf659] <==
	I0731 19:22:13.042727       1 establishing_controller.go:76] Starting EstablishingController
	I0731 19:22:13.043054       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 19:22:13.043177       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 19:22:13.043280       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 19:22:13.102070       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 19:22:13.122010       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 19:22:13.133986       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0731 19:22:13.134289       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 19:22:13.141729       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:22:13.171275       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 19:22:13.207657       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:22:13.230018       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 19:22:13.230446       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:22:13.231004       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:22:13.238789       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 19:22:13.714212       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 19:22:14.043670       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:22:14.670273       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 19:22:14.682989       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 19:22:14.717802       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 19:22:14.733374       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:22:14.741204       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:22:15.861697       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 19:22:25.685765       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:22:25.754860       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [887429ea3993da981a5c5f2927acfda88a7eba1d7bdb447a61585e0e54c8158d] <==
	I0731 19:22:25.611193       1 shared_informer.go:262] Caches are synced for persistent volume
	I0731 19:22:25.617402       1 shared_informer.go:262] Caches are synced for GC
	I0731 19:22:25.630728       1 shared_informer.go:262] Caches are synced for taint
	I0731 19:22:25.630908       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0731 19:22:25.631046       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-392764. Assuming now as a timestamp.
	I0731 19:22:25.631192       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 19:22:25.631240       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 19:22:25.631842       1 event.go:294] "Event occurred" object="test-preload-392764" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-392764 event: Registered Node test-preload-392764 in Controller"
	I0731 19:22:25.640088       1 shared_informer.go:262] Caches are synced for node
	I0731 19:22:25.640204       1 range_allocator.go:173] Starting range CIDR allocator
	I0731 19:22:25.640228       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0731 19:22:25.640275       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0731 19:22:25.641335       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0731 19:22:25.652204       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 19:22:25.676554       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 19:22:25.706269       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:22:25.741843       1 shared_informer.go:262] Caches are synced for disruption
	I0731 19:22:25.741925       1 disruption.go:371] Sending events to api server.
	I0731 19:22:25.745374       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 19:22:25.745434       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0731 19:22:25.751775       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 19:22:25.778516       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 19:22:26.189093       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 19:22:26.189255       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 19:22:26.193979       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [96cec53156a3f3ad4b10a531a85725996035799a3d5e82d2f43a9639115ae585] <==
	I0731 19:22:15.791454       1 node.go:163] Successfully retrieved node IP: 192.168.39.166
	I0731 19:22:15.791527       1 server_others.go:138] "Detected node IP" address="192.168.39.166"
	I0731 19:22:15.791861       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 19:22:15.847690       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 19:22:15.847722       1 server_others.go:206] "Using iptables Proxier"
	I0731 19:22:15.848507       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 19:22:15.849178       1 server.go:661] "Version info" version="v1.24.4"
	I0731 19:22:15.849206       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:22:15.850935       1 config.go:317] "Starting service config controller"
	I0731 19:22:15.851215       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 19:22:15.851259       1 config.go:226] "Starting endpoint slice config controller"
	I0731 19:22:15.851280       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 19:22:15.854361       1 config.go:444] "Starting node config controller"
	I0731 19:22:15.854434       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 19:22:15.951816       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 19:22:15.951990       1 shared_informer.go:262] Caches are synced for service config
	I0731 19:22:15.955197       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [f2951c2fa684cbad405660852fc50e2c0d1fbde416ffab574bc9f512f01e3928] <==
	I0731 19:22:09.722948       1 serving.go:348] Generated self-signed cert in-memory
	W0731 19:22:13.097177       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:22:13.100032       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:22:13.100244       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:22:13.100314       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:22:13.152421       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0731 19:22:13.152550       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:22:13.165558       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0731 19:22:13.169140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:22:13.169284       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:22:13.169400       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:22:13.269924       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051208    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fdg\" (UniqueName: \"kubernetes.io/projected/62aaf428-1af4-4c72-a16a-d5c3a468fb66-kube-api-access-r5fdg\") pod \"coredns-6d4b75cb6d-5tcw6\" (UID: \"62aaf428-1af4-4c72-a16a-d5c3a468fb66\") " pod="kube-system/coredns-6d4b75cb6d-5tcw6"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051234    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/340ecf7a-4c7e-4904-afe8-1ae586d2b5fd-kube-proxy\") pod \"kube-proxy-dwr26\" (UID: \"340ecf7a-4c7e-4904-afe8-1ae586d2b5fd\") " pod="kube-system/kube-proxy-dwr26"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051252    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0f818e8c-8d97-4e00-b98a-0a795c9f1e7c-tmp\") pod \"storage-provisioner\" (UID: \"0f818e8c-8d97-4e00-b98a-0a795c9f1e7c\") " pod="kube-system/storage-provisioner"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051269    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/340ecf7a-4c7e-4904-afe8-1ae586d2b5fd-lib-modules\") pod \"kube-proxy-dwr26\" (UID: \"340ecf7a-4c7e-4904-afe8-1ae586d2b5fd\") " pod="kube-system/kube-proxy-dwr26"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051290    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsv5s\" (UniqueName: \"kubernetes.io/projected/340ecf7a-4c7e-4904-afe8-1ae586d2b5fd-kube-api-access-rsv5s\") pod \"kube-proxy-dwr26\" (UID: \"340ecf7a-4c7e-4904-afe8-1ae586d2b5fd\") " pod="kube-system/kube-proxy-dwr26"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051314    1108 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume\") pod \"coredns-6d4b75cb6d-5tcw6\" (UID: \"62aaf428-1af4-4c72-a16a-d5c3a468fb66\") " pod="kube-system/coredns-6d4b75cb6d-5tcw6"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.051329    1108 reconciler.go:159] "Reconciler: start to sync state"
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.502077    1108 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/270f9430-2539-4466-be48-99e94995e9c7-config-volume\") pod \"270f9430-2539-4466-be48-99e94995e9c7\" (UID: \"270f9430-2539-4466-be48-99e94995e9c7\") "
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.503090    1108 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5bcw\" (UniqueName: \"kubernetes.io/projected/270f9430-2539-4466-be48-99e94995e9c7-kube-api-access-v5bcw\") pod \"270f9430-2539-4466-be48-99e94995e9c7\" (UID: \"270f9430-2539-4466-be48-99e94995e9c7\") "
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: W0731 19:22:14.504565    1108 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/270f9430-2539-4466-be48-99e94995e9c7/volumes/kubernetes.io~projected/kube-api-access-v5bcw: clearQuota called, but quotas disabled
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: W0731 19:22:14.504690    1108 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/270f9430-2539-4466-be48-99e94995e9c7/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.504966    1108 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/270f9430-2539-4466-be48-99e94995e9c7-kube-api-access-v5bcw" (OuterVolumeSpecName: "kube-api-access-v5bcw") pod "270f9430-2539-4466-be48-99e94995e9c7" (UID: "270f9430-2539-4466-be48-99e94995e9c7"). InnerVolumeSpecName "kube-api-access-v5bcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: E0731 19:22:14.505512    1108 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: E0731 19:22:14.505688    1108 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume podName:62aaf428-1af4-4c72-a16a-d5c3a468fb66 nodeName:}" failed. No retries permitted until 2024-07-31 19:22:15.005572719 +0000 UTC m=+7.167976465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume") pod "coredns-6d4b75cb6d-5tcw6" (UID: "62aaf428-1af4-4c72-a16a-d5c3a468fb66") : object "kube-system"/"coredns" not registered
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.506633    1108 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/270f9430-2539-4466-be48-99e94995e9c7-config-volume" (OuterVolumeSpecName: "config-volume") pod "270f9430-2539-4466-be48-99e94995e9c7" (UID: "270f9430-2539-4466-be48-99e94995e9c7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.604992    1108 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/270f9430-2539-4466-be48-99e94995e9c7-config-volume\") on node \"test-preload-392764\" DevicePath \"\""
	Jul 31 19:22:14 test-preload-392764 kubelet[1108]: I0731 19:22:14.605081    1108 reconciler.go:384] "Volume detached for volume \"kube-api-access-v5bcw\" (UniqueName: \"kubernetes.io/projected/270f9430-2539-4466-be48-99e94995e9c7-kube-api-access-v5bcw\") on node \"test-preload-392764\" DevicePath \"\""
	Jul 31 19:22:15 test-preload-392764 kubelet[1108]: E0731 19:22:15.007239    1108 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 19:22:15 test-preload-392764 kubelet[1108]: E0731 19:22:15.007354    1108 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume podName:62aaf428-1af4-4c72-a16a-d5c3a468fb66 nodeName:}" failed. No retries permitted until 2024-07-31 19:22:16.007303629 +0000 UTC m=+8.169707363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume") pod "coredns-6d4b75cb6d-5tcw6" (UID: "62aaf428-1af4-4c72-a16a-d5c3a468fb66") : object "kube-system"/"coredns" not registered
	Jul 31 19:22:16 test-preload-392764 kubelet[1108]: E0731 19:22:16.016878    1108 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 19:22:16 test-preload-392764 kubelet[1108]: E0731 19:22:16.016958    1108 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume podName:62aaf428-1af4-4c72-a16a-d5c3a468fb66 nodeName:}" failed. No retries permitted until 2024-07-31 19:22:18.016943803 +0000 UTC m=+10.179347539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume") pod "coredns-6d4b75cb6d-5tcw6" (UID: "62aaf428-1af4-4c72-a16a-d5c3a468fb66") : object "kube-system"/"coredns" not registered
	Jul 31 19:22:16 test-preload-392764 kubelet[1108]: E0731 19:22:16.095338    1108 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-5tcw6" podUID=62aaf428-1af4-4c72-a16a-d5c3a468fb66
	Jul 31 19:22:16 test-preload-392764 kubelet[1108]: I0731 19:22:16.102330    1108 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=270f9430-2539-4466-be48-99e94995e9c7 path="/var/lib/kubelet/pods/270f9430-2539-4466-be48-99e94995e9c7/volumes"
	Jul 31 19:22:18 test-preload-392764 kubelet[1108]: E0731 19:22:18.031232    1108 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 19:22:18 test-preload-392764 kubelet[1108]: E0731 19:22:18.031418    1108 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume podName:62aaf428-1af4-4c72-a16a-d5c3a468fb66 nodeName:}" failed. No retries permitted until 2024-07-31 19:22:22.031363051 +0000 UTC m=+14.193766784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/62aaf428-1af4-4c72-a16a-d5c3a468fb66-config-volume") pod "coredns-6d4b75cb6d-5tcw6" (UID: "62aaf428-1af4-4c72-a16a-d5c3a468fb66") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [f687501eee2da197fc2c4afd7050c2d13be7585c408d3cbfc70b5f29c5a60c46] <==
	I0731 19:22:15.451155       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-392764 -n test-preload-392764
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-392764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-392764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-392764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-392764: (1.136340977s)
--- FAIL: TestPreload (249.72s)

                                                
                                    
x
+
TestKubernetesUpgrade (1221.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m5.312692164s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-916231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-916231" primary control-plane node in "kubernetes-upgrade-916231" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:25:33.910712  441565 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:25:33.910829  441565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:25:33.910837  441565 out.go:304] Setting ErrFile to fd 2...
	I0731 19:25:33.910841  441565 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:25:33.911050  441565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:25:33.911634  441565 out.go:298] Setting JSON to false
	I0731 19:25:33.912687  441565 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11277,"bootTime":1722442657,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:25:33.912755  441565 start.go:139] virtualization: kvm guest
	I0731 19:25:33.915130  441565 out.go:177] * [kubernetes-upgrade-916231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:25:33.916541  441565 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:25:33.916533  441565 notify.go:220] Checking for updates...
	I0731 19:25:33.919372  441565 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:25:33.920675  441565 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:25:33.921978  441565 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:25:33.923415  441565 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:25:33.924909  441565 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:25:33.926685  441565 config.go:182] Loaded profile config "NoKubernetes-978325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:25:33.926799  441565 config.go:182] Loaded profile config "force-systemd-env-114834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:25:33.926909  441565 config.go:182] Loaded profile config "running-upgrade-043979": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 19:25:33.927015  441565 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:25:33.965320  441565 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:25:33.966552  441565 start.go:297] selected driver: kvm2
	I0731 19:25:33.966565  441565 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:25:33.966589  441565 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:25:33.967317  441565 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:25:33.967404  441565 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:25:33.983342  441565 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:25:33.983390  441565 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:25:33.983668  441565 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:25:33.983730  441565 cni.go:84] Creating CNI manager for ""
	I0731 19:25:33.983747  441565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:25:33.983757  441565 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:25:33.983843  441565 start.go:340] cluster config:
	{Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:25:33.983963  441565 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:25:33.985939  441565 out.go:177] * Starting "kubernetes-upgrade-916231" primary control-plane node in "kubernetes-upgrade-916231" cluster
	I0731 19:25:33.987467  441565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:25:33.987509  441565 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:25:33.987519  441565 cache.go:56] Caching tarball of preloaded images
	I0731 19:25:33.987612  441565 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:25:33.987624  441565 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 19:25:33.987709  441565 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/config.json ...
	I0731 19:25:33.987726  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/config.json: {Name:mk140900853a4ef2c9ee870480b60167338ae4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:25:33.987874  441565 start.go:360] acquireMachinesLock for kubernetes-upgrade-916231: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:26:06.024878  441565 start.go:364] duration metric: took 32.036963447s to acquireMachinesLock for "kubernetes-upgrade-916231"
	I0731 19:26:06.024965  441565 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:26:06.025145  441565 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:26:06.027315  441565 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 19:26:06.027587  441565 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:26:06.027655  441565 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:26:06.048140  441565 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0731 19:26:06.048640  441565 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:26:06.049256  441565 main.go:141] libmachine: Using API Version  1
	I0731 19:26:06.049287  441565 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:26:06.049661  441565 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:26:06.049885  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:26:06.050049  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:06.050187  441565 start.go:159] libmachine.API.Create for "kubernetes-upgrade-916231" (driver="kvm2")
	I0731 19:26:06.050253  441565 client.go:168] LocalClient.Create starting
	I0731 19:26:06.050293  441565 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 19:26:06.050333  441565 main.go:141] libmachine: Decoding PEM data...
	I0731 19:26:06.050356  441565 main.go:141] libmachine: Parsing certificate...
	I0731 19:26:06.050409  441565 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 19:26:06.050428  441565 main.go:141] libmachine: Decoding PEM data...
	I0731 19:26:06.050438  441565 main.go:141] libmachine: Parsing certificate...
	I0731 19:26:06.050456  441565 main.go:141] libmachine: Running pre-create checks...
	I0731 19:26:06.050469  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .PreCreateCheck
	I0731 19:26:06.050815  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetConfigRaw
	I0731 19:26:06.051188  441565 main.go:141] libmachine: Creating machine...
	I0731 19:26:06.051204  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .Create
	I0731 19:26:06.051377  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Creating KVM machine...
	I0731 19:26:06.052596  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found existing default KVM network
	I0731 19:26:06.053854  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.053709  442129 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:18:f5} reservation:<nil>}
	I0731 19:26:06.054827  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.054743  442129 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f50}
	I0731 19:26:06.054901  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | created network xml: 
	I0731 19:26:06.054921  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | <network>
	I0731 19:26:06.054933  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   <name>mk-kubernetes-upgrade-916231</name>
	I0731 19:26:06.054946  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   <dns enable='no'/>
	I0731 19:26:06.054956  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   
	I0731 19:26:06.054969  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0731 19:26:06.054991  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |     <dhcp>
	I0731 19:26:06.055004  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0731 19:26:06.055017  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |     </dhcp>
	I0731 19:26:06.055036  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   </ip>
	I0731 19:26:06.055047  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG |   
	I0731 19:26:06.055061  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | </network>
	I0731 19:26:06.055237  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | 
	I0731 19:26:06.060984  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | trying to create private KVM network mk-kubernetes-upgrade-916231 192.168.50.0/24...
	I0731 19:26:06.136920  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | private KVM network mk-kubernetes-upgrade-916231 192.168.50.0/24 created
	I0731 19:26:06.136966  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.136893  442129 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:26:06.136981  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231 ...
	I0731 19:26:06.136998  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 19:26:06.137079  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 19:26:06.406745  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.406590  442129 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa...
	I0731 19:26:06.676439  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.676273  442129 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/kubernetes-upgrade-916231.rawdisk...
	I0731 19:26:06.676479  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Writing magic tar header
	I0731 19:26:06.676495  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Writing SSH key tar header
	I0731 19:26:06.676506  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:06.676441  442129 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231 ...
	I0731 19:26:06.676572  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231
	I0731 19:26:06.676598  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 19:26:06.676623  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231 (perms=drwx------)
	I0731 19:26:06.676640  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:26:06.676654  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 19:26:06.676668  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:26:06.676684  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:26:06.676694  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Checking permissions on dir: /home
	I0731 19:26:06.676706  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:26:06.676722  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 19:26:06.676735  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 19:26:06.676750  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:26:06.676763  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:26:06.676772  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Skipping /home - not owner
	I0731 19:26:06.676787  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Creating domain...
	I0731 19:26:06.677907  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) define libvirt domain using xml: 
	I0731 19:26:06.677933  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) <domain type='kvm'>
	I0731 19:26:06.677951  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <name>kubernetes-upgrade-916231</name>
	I0731 19:26:06.677966  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <memory unit='MiB'>2200</memory>
	I0731 19:26:06.677979  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <vcpu>2</vcpu>
	I0731 19:26:06.677990  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <features>
	I0731 19:26:06.677999  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <acpi/>
	I0731 19:26:06.678006  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <apic/>
	I0731 19:26:06.678037  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <pae/>
	I0731 19:26:06.678052  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     
	I0731 19:26:06.678065  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   </features>
	I0731 19:26:06.678076  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <cpu mode='host-passthrough'>
	I0731 19:26:06.678087  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   
	I0731 19:26:06.678097  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   </cpu>
	I0731 19:26:06.678106  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <os>
	I0731 19:26:06.678130  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <type>hvm</type>
	I0731 19:26:06.678142  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <boot dev='cdrom'/>
	I0731 19:26:06.678186  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <boot dev='hd'/>
	I0731 19:26:06.678208  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <bootmenu enable='no'/>
	I0731 19:26:06.678219  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   </os>
	I0731 19:26:06.678230  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   <devices>
	I0731 19:26:06.678245  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <disk type='file' device='cdrom'>
	I0731 19:26:06.678261  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/boot2docker.iso'/>
	I0731 19:26:06.678277  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <target dev='hdc' bus='scsi'/>
	I0731 19:26:06.678291  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <readonly/>
	I0731 19:26:06.678307  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </disk>
	I0731 19:26:06.678318  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <disk type='file' device='disk'>
	I0731 19:26:06.678333  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:26:06.678349  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/kubernetes-upgrade-916231.rawdisk'/>
	I0731 19:26:06.678362  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <target dev='hda' bus='virtio'/>
	I0731 19:26:06.678372  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </disk>
	I0731 19:26:06.678380  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <interface type='network'>
	I0731 19:26:06.678388  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <source network='mk-kubernetes-upgrade-916231'/>
	I0731 19:26:06.678397  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <model type='virtio'/>
	I0731 19:26:06.678414  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </interface>
	I0731 19:26:06.678426  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <interface type='network'>
	I0731 19:26:06.678437  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <source network='default'/>
	I0731 19:26:06.678458  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <model type='virtio'/>
	I0731 19:26:06.678467  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </interface>
	I0731 19:26:06.678473  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <serial type='pty'>
	I0731 19:26:06.678481  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <target port='0'/>
	I0731 19:26:06.678511  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </serial>
	I0731 19:26:06.678536  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <console type='pty'>
	I0731 19:26:06.678577  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <target type='serial' port='0'/>
	I0731 19:26:06.678597  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </console>
	I0731 19:26:06.678607  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     <rng model='virtio'>
	I0731 19:26:06.678615  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)       <backend model='random'>/dev/random</backend>
	I0731 19:26:06.678623  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     </rng>
	I0731 19:26:06.678628  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     
	I0731 19:26:06.678635  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)     
	I0731 19:26:06.678643  441565 main.go:141] libmachine: (kubernetes-upgrade-916231)   </devices>
	I0731 19:26:06.678648  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) </domain>
	I0731 19:26:06.678655  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) 
	I0731 19:26:06.682962  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:8d:ac:e7 in network default
	I0731 19:26:06.683482  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Ensuring networks are active...
	I0731 19:26:06.683505  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:06.684116  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Ensuring network default is active
	I0731 19:26:06.684500  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Ensuring network mk-kubernetes-upgrade-916231 is active
	I0731 19:26:06.685034  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Getting domain xml...
	I0731 19:26:06.685718  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Creating domain...
	I0731 19:26:07.918260  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Waiting to get IP...
	I0731 19:26:07.919188  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:07.919707  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:07.919735  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:07.919655  442129 retry.go:31] will retry after 189.551669ms: waiting for machine to come up
	I0731 19:26:08.111043  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.111569  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.111601  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:08.111504  442129 retry.go:31] will retry after 286.255876ms: waiting for machine to come up
	I0731 19:26:08.399123  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.399548  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.399576  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:08.399509  442129 retry.go:31] will retry after 330.84895ms: waiting for machine to come up
	I0731 19:26:08.732168  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.732650  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:08.732678  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:08.732601  442129 retry.go:31] will retry after 597.403942ms: waiting for machine to come up
	I0731 19:26:09.331092  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:09.331519  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:09.331546  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:09.331457  442129 retry.go:31] will retry after 479.103896ms: waiting for machine to come up
	I0731 19:26:09.812008  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:09.812479  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:09.812510  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:09.812430  442129 retry.go:31] will retry after 758.358292ms: waiting for machine to come up
	I0731 19:26:10.572503  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:10.573057  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:10.573085  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:10.572992  442129 retry.go:31] will retry after 857.946601ms: waiting for machine to come up
	I0731 19:26:11.433262  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:11.433724  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:11.433754  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:11.433677  442129 retry.go:31] will retry after 1.355961767s: waiting for machine to come up
	I0731 19:26:12.791304  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:12.791723  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:12.791743  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:12.791684  442129 retry.go:31] will retry after 1.848471522s: waiting for machine to come up
	I0731 19:26:14.641796  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:14.642352  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:14.642387  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:14.642277  442129 retry.go:31] will retry after 1.862455961s: waiting for machine to come up
	I0731 19:26:16.506758  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:16.507247  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:16.507279  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:16.507191  442129 retry.go:31] will retry after 2.319463797s: waiting for machine to come up
	I0731 19:26:18.829638  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:18.830047  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:18.830103  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:18.830010  442129 retry.go:31] will retry after 3.003017816s: waiting for machine to come up
	I0731 19:26:21.834735  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:21.835306  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:21.835339  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:21.835246  442129 retry.go:31] will retry after 3.988642346s: waiting for machine to come up
	I0731 19:26:25.825577  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:25.826098  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find current IP address of domain kubernetes-upgrade-916231 in network mk-kubernetes-upgrade-916231
	I0731 19:26:25.826119  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | I0731 19:26:25.826071  442129 retry.go:31] will retry after 4.562821973s: waiting for machine to come up
	I0731 19:26:30.393411  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.393929  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Found IP for machine: 192.168.50.208
	I0731 19:26:30.393959  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has current primary IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.393966  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Reserving static IP address...
	I0731 19:26:30.394358  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-916231", mac: "52:54:00:1a:c3:a5", ip: "192.168.50.208"} in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.470835  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Getting to WaitForSSH function...
	I0731 19:26:30.470879  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Reserved static IP address: 192.168.50.208
	I0731 19:26:30.470894  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Waiting for SSH to be available...
	I0731 19:26:30.473580  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.473998  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:minikube Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:30.474032  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.474163  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Using SSH client type: external
	I0731 19:26:30.474191  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa (-rw-------)
	I0731 19:26:30.474238  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:26:30.474256  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | About to run SSH command:
	I0731 19:26:30.474273  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | exit 0
	I0731 19:26:30.609506  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | SSH cmd err, output: <nil>: 
	I0731 19:26:30.609803  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) KVM machine creation complete!
	I0731 19:26:30.610198  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetConfigRaw
	I0731 19:26:30.610887  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:30.611127  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:30.611345  441565 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:26:30.611367  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetState
	I0731 19:26:30.612861  441565 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:26:30.612883  441565 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:26:30.612892  441565 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:26:30.612900  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:30.615793  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.616292  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:30.616323  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.616478  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:30.616681  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.616895  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.617058  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:30.617276  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:30.617513  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:30.617525  441565 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:26:30.739860  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:26:30.739906  441565 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:26:30.739915  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:30.742809  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.743193  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:30.743222  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.743376  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:30.743561  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.743726  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.743845  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:30.744031  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:30.744310  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:30.744329  441565 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:26:30.857763  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:26:30.857886  441565 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:26:30.857904  441565 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:26:30.857923  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:26:30.858206  441565 buildroot.go:166] provisioning hostname "kubernetes-upgrade-916231"
	I0731 19:26:30.858238  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:26:30.858555  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:30.861597  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.862036  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:30.862070  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.862228  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:30.862441  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.862650  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.862810  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:30.863001  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:30.863208  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:30.863225  441565 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-916231 && echo "kubernetes-upgrade-916231" | sudo tee /etc/hostname
	I0731 19:26:30.995417  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-916231
	
	I0731 19:26:30.995467  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:30.998585  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.999037  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:30.999063  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:30.999261  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:30.999480  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.999647  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:30.999813  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.000059  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:31.000256  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:31.000275  441565 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-916231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-916231/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-916231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:26:31.128038  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:26:31.128086  441565 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:26:31.128118  441565 buildroot.go:174] setting up certificates
	I0731 19:26:31.128131  441565 provision.go:84] configureAuth start
	I0731 19:26:31.128148  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:26:31.128470  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:26:31.131662  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.132020  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.132065  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.132231  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.134746  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.135133  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.135160  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.135326  441565 provision.go:143] copyHostCerts
	I0731 19:26:31.135390  441565 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:26:31.135403  441565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:26:31.135471  441565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:26:31.135619  441565 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:26:31.135632  441565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:26:31.135666  441565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:26:31.135760  441565 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:26:31.135769  441565 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:26:31.135798  441565 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:26:31.135893  441565 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-916231 san=[127.0.0.1 192.168.50.208 kubernetes-upgrade-916231 localhost minikube]
	I0731 19:26:31.223851  441565 provision.go:177] copyRemoteCerts
	I0731 19:26:31.223927  441565 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:26:31.223966  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.227006  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.227382  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.227413  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.227609  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.227865  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.228060  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.228266  441565 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:26:31.315443  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:26:31.349505  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:26:31.375931  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 19:26:31.407512  441565 provision.go:87] duration metric: took 279.365438ms to configureAuth
	I0731 19:26:31.407544  441565 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:26:31.407709  441565 config.go:182] Loaded profile config "kubernetes-upgrade-916231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 19:26:31.407786  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.410979  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.411445  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.411487  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.411620  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.411876  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.412072  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.412245  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.412449  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:31.412665  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:31.412682  441565 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:26:31.706693  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:26:31.706725  441565 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:26:31.706737  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetURL
	I0731 19:26:31.708071  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | Using libvirt version 6000000
	I0731 19:26:31.710310  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.710694  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.710738  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.710936  441565 main.go:141] libmachine: Docker is up and running!
	I0731 19:26:31.710962  441565 main.go:141] libmachine: Reticulating splines...
	I0731 19:26:31.710970  441565 client.go:171] duration metric: took 25.660706498s to LocalClient.Create
	I0731 19:26:31.710999  441565 start.go:167] duration metric: took 25.660812293s to libmachine.API.Create "kubernetes-upgrade-916231"
	I0731 19:26:31.711013  441565 start.go:293] postStartSetup for "kubernetes-upgrade-916231" (driver="kvm2")
	I0731 19:26:31.711027  441565 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:26:31.711051  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:31.711312  441565 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:26:31.711343  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.713589  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.714024  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.714059  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.714246  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.714426  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.714562  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.714734  441565 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:26:31.808405  441565 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:26:31.812849  441565 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:26:31.812874  441565 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:26:31.812946  441565 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:26:31.813039  441565 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:26:31.813150  441565 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:26:31.823064  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:26:31.847141  441565 start.go:296] duration metric: took 136.113699ms for postStartSetup
	I0731 19:26:31.847201  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetConfigRaw
	I0731 19:26:31.847844  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:26:31.850693  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.851078  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.851109  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.851418  441565 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/config.json ...
	I0731 19:26:31.851611  441565 start.go:128] duration metric: took 25.826454249s to createHost
	I0731 19:26:31.851639  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.854156  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.854512  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.854539  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.854630  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.854831  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.855001  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.855121  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.855265  441565 main.go:141] libmachine: Using SSH client type: native
	I0731 19:26:31.855456  441565 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:26:31.855469  441565 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 19:26:31.969314  441565 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722453991.925264011
	
	I0731 19:26:31.969342  441565 fix.go:216] guest clock: 1722453991.925264011
	I0731 19:26:31.969353  441565 fix.go:229] Guest: 2024-07-31 19:26:31.925264011 +0000 UTC Remote: 2024-07-31 19:26:31.851623415 +0000 UTC m=+57.977428702 (delta=73.640596ms)
	I0731 19:26:31.969381  441565 fix.go:200] guest clock delta is within tolerance: 73.640596ms
	I0731 19:26:31.969388  441565 start.go:83] releasing machines lock for "kubernetes-upgrade-916231", held for 25.944463766s
	I0731 19:26:31.969419  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:31.969699  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:26:31.972837  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.973270  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.973293  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.973459  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:31.974148  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:31.974341  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:26:31.974449  441565 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:26:31.974489  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.974603  441565 ssh_runner.go:195] Run: cat /version.json
	I0731 19:26:31.974632  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:26:31.977545  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.977814  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.977956  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.977986  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.978158  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.978292  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:31.978351  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.978419  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:31.978452  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:26:31.978974  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:26:31.978998  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.979175  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:26:31.979167  441565 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:26:31.979326  441565 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:26:32.076676  441565 ssh_runner.go:195] Run: systemctl --version
	I0731 19:26:32.104733  441565 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:26:32.284401  441565 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:26:32.293137  441565 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:26:32.293235  441565 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:26:32.319489  441565 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:26:32.319524  441565 start.go:495] detecting cgroup driver to use...
	I0731 19:26:32.319602  441565 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:26:32.344513  441565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:26:32.366203  441565 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:26:32.366285  441565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:26:32.386303  441565 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:26:32.405892  441565 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:26:32.533515  441565 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:26:32.685971  441565 docker.go:233] disabling docker service ...
	I0731 19:26:32.686060  441565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:26:32.704626  441565 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:26:32.720922  441565 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:26:32.881800  441565 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:26:33.016977  441565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:26:33.034433  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:26:33.056961  441565 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 19:26:33.057044  441565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:26:33.070071  441565 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:26:33.070156  441565 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:26:33.083919  441565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:26:33.095662  441565 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:26:33.111147  441565 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:26:33.126413  441565 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:26:33.139447  441565 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:26:33.139542  441565 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:26:33.158992  441565 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:26:33.173260  441565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:26:33.324101  441565 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:26:33.500151  441565 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:26:33.500237  441565 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:26:33.505857  441565 start.go:563] Will wait 60s for crictl version
	I0731 19:26:33.505917  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:33.509921  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:26:33.558916  441565 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:26:33.559013  441565 ssh_runner.go:195] Run: crio --version
	I0731 19:26:33.592674  441565 ssh_runner.go:195] Run: crio --version
	I0731 19:26:33.627280  441565 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 19:26:33.628537  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:26:33.631350  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:33.631682  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:26:21 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:26:33.631707  441565 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:26:33.631931  441565 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 19:26:33.636525  441565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:26:33.650501  441565 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:26:33.650640  441565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:26:33.650705  441565 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:26:33.692469  441565 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 19:26:33.692546  441565 ssh_runner.go:195] Run: which lz4
	I0731 19:26:33.696915  441565 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 19:26:33.701300  441565 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:26:33.701333  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 19:26:35.431982  441565 crio.go:462] duration metric: took 1.735086545s to copy over tarball
	I0731 19:26:35.432074  441565 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:26:38.055056  441565 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.622942427s)
	I0731 19:26:38.055117  441565 crio.go:469] duration metric: took 2.623079144s to extract the tarball
	I0731 19:26:38.055134  441565 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 19:26:38.099455  441565 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:26:38.147912  441565 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 19:26:38.147942  441565 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 19:26:38.148007  441565 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:26:38.148045  441565 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 19:26:38.148044  441565 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 19:26:38.148091  441565 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0731 19:26:38.148087  441565 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 19:26:38.148121  441565 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 19:26:38.148096  441565 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 19:26:38.148348  441565 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 19:26:38.149848  441565 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 19:26:38.149867  441565 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 19:26:38.149879  441565 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 19:26:38.149911  441565 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 19:26:38.149848  441565 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 19:26:38.149850  441565 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 19:26:38.149850  441565 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:26:38.149931  441565 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 19:26:38.319666  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 19:26:38.369635  441565 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 19:26:38.369686  441565 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 19:26:38.369750  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.374444  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 19:26:38.376230  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 19:26:38.384664  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 19:26:38.423876  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 19:26:38.442944  441565 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 19:26:38.443000  441565 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 19:26:38.443057  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.457159  441565 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 19:26:38.457247  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 19:26:38.457276  441565 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 19:26:38.457319  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.502561  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 19:26:38.502676  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 19:26:38.509164  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 19:26:38.509185  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 19:26:38.518014  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 19:26:38.546021  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 19:26:38.599165  441565 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 19:26:38.599190  441565 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 19:26:38.599220  441565 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 19:26:38.599226  441565 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 19:26:38.599263  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.599272  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.613624  441565 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 19:26:38.613681  441565 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 19:26:38.613684  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 19:26:38.613722  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.613807  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 19:26:38.631178  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 19:26:38.669268  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 19:26:38.669321  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 19:26:38.669325  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 19:26:38.704133  441565 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 19:26:38.704178  441565 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 19:26:38.704228  441565 ssh_runner.go:195] Run: which crictl
	I0731 19:26:38.717316  441565 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 19:26:38.717389  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 19:26:38.756534  441565 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 19:26:39.132582  441565 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 19:26:39.274265  441565 cache_images.go:92] duration metric: took 1.126302644s to LoadCachedImages
	W0731 19:26:39.274372  441565 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19356-395032/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 19:26:39.274391  441565 kubeadm.go:934] updating node { 192.168.50.208 8443 v1.20.0 crio true true} ...
	I0731 19:26:39.274544  441565 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-916231 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:26:39.274643  441565 ssh_runner.go:195] Run: crio config
	I0731 19:26:39.328162  441565 cni.go:84] Creating CNI manager for ""
	I0731 19:26:39.328183  441565 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:26:39.328191  441565 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:26:39.328209  441565 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.208 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-916231 NodeName:kubernetes-upgrade-916231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 19:26:39.328332  441565 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-916231"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:26:39.328417  441565 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 19:26:39.341204  441565 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:26:39.341274  441565 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:26:39.351395  441565 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0731 19:26:39.369542  441565 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:26:39.387733  441565 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0731 19:26:39.406027  441565 ssh_runner.go:195] Run: grep 192.168.50.208	control-plane.minikube.internal$ /etc/hosts
	I0731 19:26:39.410281  441565 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:26:39.423847  441565 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:26:39.553470  441565 ssh_runner.go:195] Run: sudo systemctl start kubelet
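	The lines above stage the kubelet drop-in (10-kubeadm.conf) and the kubelet.service unit, then daemon-reload and start kubelet. A minimal sketch for checking by hand that the unit actually picked up that drop-in; this assumes shell access to the VM for this profile (for example via 'minikube ssh -p kubernetes-upgrade-916231', the profile name used throughout this run):
	# show the merged kubelet unit, including the 10-kubeadm.conf drop-in written above
	sudo systemctl cat kubelet
	# current service state after the 'systemctl start kubelet' above
	sudo systemctl status kubelet --no-pager
	# recent kubelet output; the same journal source this log gathers later with 'journalctl -u kubelet -n 400'
	sudo journalctl -u kubelet -n 50 --no-pager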
	I0731 19:26:39.571277  441565 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231 for IP: 192.168.50.208
	I0731 19:26:39.571317  441565 certs.go:194] generating shared ca certs ...
	I0731 19:26:39.571341  441565 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:39.571538  441565 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:26:39.571588  441565 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:26:39.571599  441565 certs.go:256] generating profile certs ...
	I0731 19:26:39.571660  441565 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.key
	I0731 19:26:39.571675  441565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.crt with IP's: []
	I0731 19:26:39.874874  441565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.crt ...
	I0731 19:26:39.874913  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.crt: {Name:mkd7830af058b9d70773e302e97b264a5e69f752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:39.875169  441565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.key ...
	I0731 19:26:39.875202  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.key: {Name:mk1737dc262a05e92a5b50af44d798715ecc7f97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:39.875325  441565 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key.7a7f958c
	I0731 19:26:39.875346  441565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt.7a7f958c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.208]
	I0731 19:26:40.067767  441565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt.7a7f958c ...
	I0731 19:26:40.067808  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt.7a7f958c: {Name:mk8a9ff292ab9614587e2452c2fec517777a4612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:40.067994  441565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key.7a7f958c ...
	I0731 19:26:40.068013  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key.7a7f958c: {Name:mk3ac0d476d6fae8529620b6645d3c60444562c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:40.068109  441565 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt.7a7f958c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt
	I0731 19:26:40.068214  441565 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key.7a7f958c -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key
	I0731 19:26:40.068303  441565 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key
	I0731 19:26:40.068326  441565 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.crt with IP's: []
	I0731 19:26:40.233623  441565 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.crt ...
	I0731 19:26:40.233659  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.crt: {Name:mk3f6a2ccf886888846a6d56ab0524836e36ed09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:40.233834  441565 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key ...
	I0731 19:26:40.233847  441565 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key: {Name:mkf0a4def7857d091b6bcc8057d72b6c3e34e760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:26:40.234009  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:26:40.234048  441565 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:26:40.234059  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:26:40.234093  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:26:40.234134  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:26:40.234163  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:26:40.234209  441565 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:26:40.234813  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:26:40.269280  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:26:40.301690  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:26:40.333069  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:26:40.358430  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 19:26:40.386081  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:26:40.412549  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:26:40.442280  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 19:26:40.471545  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:26:40.495602  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:26:40.522185  441565 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:26:40.552496  441565 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:26:40.575723  441565 ssh_runner.go:195] Run: openssl version
	I0731 19:26:40.583902  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:26:40.598396  441565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:26:40.604863  441565 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:26:40.604944  441565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:26:40.611294  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 19:26:40.625580  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:26:40.637464  441565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:26:40.642456  441565 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:26:40.642526  441565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:26:40.648981  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:26:40.665655  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:26:40.680955  441565 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:26:40.686222  441565 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:26:40.686311  441565 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:26:40.692477  441565 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
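	The openssl/ln sequence above installs each extra CA by computing its OpenSSL subject hash and linking the certificate under that hash in /etc/ssl/certs. A condensed sketch of that flow for the minikubeCA certificate, using only the paths and the b5213941 hash already shown in this log:
	# print the subject hash openssl uses to look the CA up (this log resolves it to b5213941 for minikubeCA.pem)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# place the cert in the shared directory and create the hash-named symlink, as the commands above do
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0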
	I0731 19:26:40.709646  441565 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:26:40.714210  441565 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:26:40.714295  441565 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:26:40.714393  441565 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:26:40.714465  441565 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:26:40.757966  441565 cri.go:89] found id: ""
	I0731 19:26:40.758075  441565 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:26:40.768354  441565 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:26:40.778381  441565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:26:40.792943  441565 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:26:40.792964  441565 kubeadm.go:157] found existing configuration files:
	
	I0731 19:26:40.793025  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:26:40.803136  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:26:40.803217  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:26:40.813986  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:26:40.827264  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:26:40.827339  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:26:40.842162  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:26:40.855930  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:26:40.856011  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:26:40.869467  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:26:40.879430  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:26:40.879507  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:26:40.890334  441565 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:26:41.051581  441565 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 19:26:41.051669  441565 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:26:41.239687  441565 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:26:41.239841  441565 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:26:41.239972  441565 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:26:41.462144  441565 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:26:41.560816  441565 out.go:204]   - Generating certificates and keys ...
	I0731 19:26:41.560949  441565 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:26:41.561042  441565 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:26:41.573051  441565 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:26:41.724298  441565 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:26:41.848146  441565 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:26:42.178751  441565 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:26:42.226756  441565 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:26:42.227183  441565 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	I0731 19:26:42.340585  441565 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:26:42.340898  441565 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	I0731 19:26:42.439917  441565 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:26:42.510432  441565 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:26:42.745795  441565 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:26:42.746192  441565 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:26:42.863785  441565 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:26:43.069639  441565 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:26:43.322043  441565 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:26:43.929212  441565 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:26:43.954643  441565 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:26:43.954778  441565 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:26:43.954882  441565 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:26:44.108247  441565 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:26:44.109587  441565 out.go:204]   - Booting up control plane ...
	I0731 19:26:44.109736  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:26:44.136057  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:26:44.140583  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:26:44.141738  441565 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:26:44.151442  441565 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 19:27:24.114980  441565 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 19:27:24.115554  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:27:24.115818  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:27:29.114697  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:27:29.114919  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:27:39.113680  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:27:39.113989  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:27:59.113976  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:27:59.114278  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:28:39.112574  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:28:39.112874  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:28:39.112895  441565 kubeadm.go:310] 
	I0731 19:28:39.112929  441565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 19:28:39.112962  441565 kubeadm.go:310] 		timed out waiting for the condition
	I0731 19:28:39.112969  441565 kubeadm.go:310] 
	I0731 19:28:39.113002  441565 kubeadm.go:310] 	This error is likely caused by:
	I0731 19:28:39.113031  441565 kubeadm.go:310] 		- The kubelet is not running
	I0731 19:28:39.113164  441565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 19:28:39.113176  441565 kubeadm.go:310] 
	I0731 19:28:39.113311  441565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 19:28:39.113367  441565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 19:28:39.113425  441565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 19:28:39.113434  441565 kubeadm.go:310] 
	I0731 19:28:39.113551  441565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 19:28:39.113653  441565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 19:28:39.113684  441565 kubeadm.go:310] 
	I0731 19:28:39.113824  441565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 19:28:39.113954  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 19:28:39.114061  441565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 19:28:39.114166  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 19:28:39.114174  441565 kubeadm.go:310] 
	I0731 19:28:39.114678  441565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:28:39.114818  441565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 19:28:39.114913  441565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 19:28:39.115089  441565 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-916231 localhost] and IPs [192.168.50.208 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
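	Both failed init attempts end with the same kubeadm advice. A minimal sketch for running those suggested checks against this profile from the host; the individual checks are the ones kubeadm names above, the profile name and crio socket path come from this run, and the --no-pager/tail additions are only for convenience:
	# open a shell inside the kubernetes-upgrade-916231 VM
	minikube ssh -p kubernetes-upgrade-916231
	# then, inside the VM, the diagnostics kubeadm recommends:
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# once a failing container id is known:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID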
	
	I0731 19:28:39.115161  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 19:28:41.060490  441565 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.94528948s)
	I0731 19:28:41.060586  441565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:28:41.075238  441565 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:28:41.085717  441565 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:28:41.085744  441565 kubeadm.go:157] found existing configuration files:
	
	I0731 19:28:41.085801  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:28:41.095679  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:28:41.095756  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:28:41.105989  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:28:41.115233  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:28:41.115296  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:28:41.125156  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:28:41.134712  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:28:41.134804  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:28:41.144442  441565 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:28:41.153548  441565 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:28:41.153603  441565 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:28:41.165083  441565 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:28:41.235402  441565 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 19:28:41.235505  441565 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:28:41.377170  441565 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:28:41.377278  441565 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:28:41.377365  441565 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:28:41.595272  441565 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:28:41.597399  441565 out.go:204]   - Generating certificates and keys ...
	I0731 19:28:41.597505  441565 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:28:41.597602  441565 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:28:41.597733  441565 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 19:28:41.597832  441565 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 19:28:41.597936  441565 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 19:28:41.598023  441565 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 19:28:41.598094  441565 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 19:28:41.598162  441565 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 19:28:41.598277  441565 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 19:28:41.598372  441565 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 19:28:41.598428  441565 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 19:28:41.598519  441565 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:28:41.814668  441565 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:28:42.011852  441565 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:28:42.278113  441565 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:28:42.429178  441565 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:28:42.444541  441565 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 19:28:42.447186  441565 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 19:28:42.447404  441565 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 19:28:42.589240  441565 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 19:28:42.591427  441565 out.go:204]   - Booting up control plane ...
	I0731 19:28:42.591556  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 19:28:42.604240  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 19:28:42.605650  441565 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 19:28:42.606661  441565 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:28:42.609545  441565 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 19:29:22.611761  441565 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 19:29:22.611973  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:29:22.612227  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:29:27.612955  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:29:27.613193  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:29:37.613957  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:29:37.614237  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:29:57.615862  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:29:57.616093  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:30:37.616231  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:30:37.616541  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:30:37.616565  441565 kubeadm.go:310] 
	I0731 19:30:37.616618  441565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 19:30:37.616682  441565 kubeadm.go:310] 		timed out waiting for the condition
	I0731 19:30:37.616691  441565 kubeadm.go:310] 
	I0731 19:30:37.616732  441565 kubeadm.go:310] 	This error is likely caused by:
	I0731 19:30:37.616774  441565 kubeadm.go:310] 		- The kubelet is not running
	I0731 19:30:37.616907  441565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 19:30:37.616921  441565 kubeadm.go:310] 
	I0731 19:30:37.617009  441565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 19:30:37.617054  441565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 19:30:37.617101  441565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 19:30:37.617111  441565 kubeadm.go:310] 
	I0731 19:30:37.617237  441565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 19:30:37.617340  441565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 19:30:37.617347  441565 kubeadm.go:310] 
	I0731 19:30:37.617480  441565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 19:30:37.617592  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 19:30:37.617688  441565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 19:30:37.617779  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 19:30:37.617786  441565 kubeadm.go:310] 
	I0731 19:30:37.618674  441565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:30:37.618783  441565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 19:30:37.618878  441565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 19:30:37.618954  441565 kubeadm.go:394] duration metric: took 3m56.904666471s to StartCluster
	I0731 19:30:37.619032  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 19:30:37.619098  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 19:30:37.673896  441565 cri.go:89] found id: ""
	I0731 19:30:37.673924  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.673934  441565 logs.go:278] No container was found matching "kube-apiserver"
	I0731 19:30:37.673942  441565 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 19:30:37.674013  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 19:30:37.722228  441565 cri.go:89] found id: ""
	I0731 19:30:37.722267  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.722279  441565 logs.go:278] No container was found matching "etcd"
	I0731 19:30:37.722291  441565 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 19:30:37.722363  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 19:30:37.773267  441565 cri.go:89] found id: ""
	I0731 19:30:37.773296  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.773307  441565 logs.go:278] No container was found matching "coredns"
	I0731 19:30:37.773314  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 19:30:37.773381  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 19:30:37.813679  441565 cri.go:89] found id: ""
	I0731 19:30:37.813716  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.813728  441565 logs.go:278] No container was found matching "kube-scheduler"
	I0731 19:30:37.813737  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 19:30:37.813804  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 19:30:37.850740  441565 cri.go:89] found id: ""
	I0731 19:30:37.850769  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.850778  441565 logs.go:278] No container was found matching "kube-proxy"
	I0731 19:30:37.850785  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 19:30:37.850839  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 19:30:37.891443  441565 cri.go:89] found id: ""
	I0731 19:30:37.891474  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.891484  441565 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 19:30:37.891491  441565 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 19:30:37.891558  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 19:30:37.932204  441565 cri.go:89] found id: ""
	I0731 19:30:37.932248  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.932261  441565 logs.go:278] No container was found matching "kindnet"
	I0731 19:30:37.932277  441565 logs.go:123] Gathering logs for kubelet ...
	I0731 19:30:37.932296  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 19:30:37.992472  441565 logs.go:123] Gathering logs for dmesg ...
	I0731 19:30:37.992512  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 19:30:38.008005  441565 logs.go:123] Gathering logs for describe nodes ...
	I0731 19:30:38.008043  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 19:30:38.155717  441565 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 19:30:38.155747  441565 logs.go:123] Gathering logs for CRI-O ...
	I0731 19:30:38.155764  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 19:30:38.269491  441565 logs.go:123] Gathering logs for container status ...
	I0731 19:30:38.269537  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 19:30:38.320851  441565 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 19:30:38.320918  441565 out.go:239] * 
	W0731 19:30:38.320985  441565 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:30:38.321016  441565 out.go:239] * 
	W0731 19:30:38.322107  441565 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 19:30:38.562096  441565 out.go:177] 
	W0731 19:30:38.759839  441565 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:30:38.759906  441565 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 19:30:38.759946  441565 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 19:30:38.939936  441565 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
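Note on the failure above: the repeated [kubelet-check] lines come from kubeadm polling the kubelet's local healthz endpoint (default port 10248) until it answers; in this run it never did, so the wait-control-plane phase timed out. The following is a minimal Go sketch of an equivalent probe run on the node itself; the 5-second timeout is an illustrative assumption, not a value taken from this run.

	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// Mirror the check behind the "[kubelet-check] ... connection refused" lines:
		// GET the kubelet's local healthz endpoint and report the result.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" here matches this run: the kubelet never came up,
			// so the static control-plane pods were never started.
			fmt.Println("kubelet healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}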
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-916231
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-916231: (6.885365316s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-916231 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-916231 status --format={{.Host}}: exit status 7 (67.933227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.895407669s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-916231 version --output=json
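Aside: 'kubectl version --output=json' (run above) reports clientVersion and serverVersion objects, and the gitVersion fields are what a version check would normally read. Below is a hedged Go sketch of such a check; the struct fields reflect kubectl's standard JSON output, not this test's own code.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// versionInfo captures only the gitVersion field of kubectl's JSON output.
	type versionInfo struct {
		GitVersion string `json:"gitVersion"`
	}
	
	type versionOutput struct {
		ClientVersion versionInfo `json:"clientVersion"`
		ServerVersion versionInfo `json:"serverVersion"`
	}
	
	func main() {
		// Same command the test runs, without the --context flag.
		out, err := exec.Command("kubectl", "version", "--output=json").Output()
		if err != nil {
			fmt.Println("kubectl version failed:", err)
			return
		}
		var v versionOutput
		if err := json.Unmarshal(out, &v); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		fmt.Println("client:", v.ClientVersion.GitVersion, "server:", v.ServerVersion.GitVersion)
	}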
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (120.58653ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-916231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-916231
	    minikube start -p kubernetes-upgrade-916231 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9162312 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-916231 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
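For context on the K8S_DOWNGRADE_UNSUPPORTED exit above: minikube refuses to reuse a profile whose stored Kubernetes version is newer than the one now requested, which is why the suggestion is to delete and recreate, start a second cluster, or keep the existing version. The sketch below is an illustrative, stdlib-only comparison of the two versions from this run; it is not minikube's actual implementation.

	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// majorMinor pulls the numeric major/minor parts out of a version such as
	// "v1.31.0-beta.0" or "v1.20.0"; patch and pre-release parts are ignored.
	func majorMinor(v string) (int, int, error) {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		if len(parts) < 2 {
			return 0, 0, fmt.Errorf("unexpected version %q", v)
		}
		major, err := strconv.Atoi(parts[0])
		if err != nil {
			return 0, 0, err
		}
		minor, err := strconv.Atoi(parts[1])
		if err != nil {
			return 0, 0, err
		}
		return major, minor, nil
	}
	
	func main() {
		existing, requested := "v1.31.0-beta.0", "v1.20.0"
		emaj, emin, _ := majorMinor(existing)
		rmaj, rmin, _ := majorMinor(requested)
		if rmaj < emaj || (rmaj == emaj && rmin < emin) {
			// Corresponds to the K8S_DOWNGRADE_UNSUPPORTED exit in the output above.
			fmt.Printf("refusing to downgrade existing %s cluster to %s\n", existing, requested)
			return
		}
		fmt.Println("version change accepted")
	}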
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (14m6.547809686s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-916231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-916231" primary control-plane node in "kubernetes-upgrade-916231" cluster
	* Updating the running kvm2 "kubernetes-upgrade-916231" VM ...
	* Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:31:46.329235  446963 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:31:46.329379  446963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:31:46.329391  446963 out.go:304] Setting ErrFile to fd 2...
	I0731 19:31:46.329399  446963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:31:46.329590  446963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:31:46.330144  446963 out.go:298] Setting JSON to false
	I0731 19:31:46.331184  446963 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11649,"bootTime":1722442657,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:31:46.331250  446963 start.go:139] virtualization: kvm guest
	I0731 19:31:46.333492  446963 out.go:177] * [kubernetes-upgrade-916231] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:31:46.335067  446963 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:31:46.335134  446963 notify.go:220] Checking for updates...
	I0731 19:31:46.337638  446963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:31:46.338971  446963 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:31:46.340483  446963 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:31:46.341790  446963 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:31:46.343076  446963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:31:46.345072  446963 config.go:182] Loaded profile config "kubernetes-upgrade-916231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 19:31:46.345662  446963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:31:46.345753  446963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:31:46.367611  446963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40935
	I0731 19:31:46.368070  446963 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:31:46.368832  446963 main.go:141] libmachine: Using API Version  1
	I0731 19:31:46.368857  446963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:31:46.369305  446963 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:31:46.369514  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:31:46.369833  446963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:31:46.370316  446963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:31:46.370368  446963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:31:46.385928  446963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37647
	I0731 19:31:46.386405  446963 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:31:46.386902  446963 main.go:141] libmachine: Using API Version  1
	I0731 19:31:46.386928  446963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:31:46.387285  446963 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:31:46.387482  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:31:46.428252  446963 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:31:46.429399  446963 start.go:297] selected driver: kvm2
	I0731 19:31:46.429416  446963 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:31:46.429539  446963 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:31:46.430525  446963 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:31:46.430606  446963 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:31:46.447709  446963 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:31:46.448289  446963 cni.go:84] Creating CNI manager for ""
	I0731 19:31:46.448316  446963 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:31:46.448390  446963 start.go:340] cluster config:
	{Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:31:46.448534  446963 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:31:46.450482  446963 out.go:177] * Starting "kubernetes-upgrade-916231" primary control-plane node in "kubernetes-upgrade-916231" cluster
	I0731 19:31:46.451813  446963 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 19:31:46.451862  446963 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:31:46.451873  446963 cache.go:56] Caching tarball of preloaded images
	I0731 19:31:46.451966  446963 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:31:46.451976  446963 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 19:31:46.452059  446963 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/config.json ...
	I0731 19:31:46.452258  446963 start.go:360] acquireMachinesLock for kubernetes-upgrade-916231: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:31:53.845501  446963 start.go:364] duration metric: took 7.393213943s to acquireMachinesLock for "kubernetes-upgrade-916231"
	I0731 19:31:53.845563  446963 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:31:53.845571  446963 fix.go:54] fixHost starting: 
	I0731 19:31:53.846008  446963 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:31:53.846085  446963 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:31:53.863796  446963 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33355
	I0731 19:31:53.864231  446963 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:31:53.864797  446963 main.go:141] libmachine: Using API Version  1
	I0731 19:31:53.864827  446963 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:31:53.865180  446963 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:31:53.865450  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:31:53.865608  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetState
	I0731 19:31:53.867315  446963 fix.go:112] recreateIfNeeded on kubernetes-upgrade-916231: state=Running err=<nil>
	W0731 19:31:53.867342  446963 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:31:53.869521  446963 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-916231" VM ...
	I0731 19:31:53.870887  446963 machine.go:94] provisionDockerMachine start ...
	I0731 19:31:53.870919  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:31:53.871164  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:53.874269  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:53.874691  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:53.874723  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:53.874861  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:31:53.875029  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:53.875213  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:53.875363  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:31:53.875534  446963 main.go:141] libmachine: Using SSH client type: native
	I0731 19:31:53.875719  446963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:31:53.875731  446963 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:31:53.992976  446963 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-916231
	
	I0731 19:31:53.993009  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:31:53.993302  446963 buildroot.go:166] provisioning hostname "kubernetes-upgrade-916231"
	I0731 19:31:53.993338  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:31:53.993624  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:53.996646  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:53.997042  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:53.997073  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:53.997238  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:31:53.997433  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:53.997588  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:53.997761  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:31:53.997944  446963 main.go:141] libmachine: Using SSH client type: native
	I0731 19:31:53.998229  446963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:31:53.998253  446963 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-916231 && echo "kubernetes-upgrade-916231" | sudo tee /etc/hostname
	I0731 19:31:54.129128  446963 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-916231
	
	I0731 19:31:54.129161  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:54.132623  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.133032  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:54.133065  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.133334  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:31:54.133488  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:54.133609  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:54.133736  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:31:54.134010  446963 main.go:141] libmachine: Using SSH client type: native
	I0731 19:31:54.134226  446963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:31:54.134244  446963 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-916231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-916231/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-916231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:31:54.257756  446963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:31:54.257817  446963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:31:54.257866  446963 buildroot.go:174] setting up certificates
	I0731 19:31:54.257879  446963 provision.go:84] configureAuth start
	I0731 19:31:54.257895  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetMachineName
	I0731 19:31:54.258267  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:31:54.261532  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.261958  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:54.262012  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.262175  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:54.264644  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.265111  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:54.265139  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.265317  446963 provision.go:143] copyHostCerts
	I0731 19:31:54.265392  446963 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:31:54.265406  446963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:31:54.265470  446963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:31:54.265594  446963 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:31:54.265606  446963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:31:54.265634  446963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:31:54.265739  446963 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:31:54.265750  446963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:31:54.265782  446963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:31:54.265860  446963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-916231 san=[127.0.0.1 192.168.50.208 kubernetes-upgrade-916231 localhost minikube]
	I0731 19:31:54.465537  446963 provision.go:177] copyRemoteCerts
	I0731 19:31:54.465601  446963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:31:54.465630  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:54.468668  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.469310  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:54.469346  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.469554  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:31:54.469828  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:54.470044  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:31:54.470248  446963 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:31:54.564924  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:31:54.600586  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 19:31:54.627791  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 19:31:54.655618  446963 provision.go:87] duration metric: took 397.725273ms to configureAuth
	I0731 19:31:54.655650  446963 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:31:54.655873  446963 config.go:182] Loaded profile config "kubernetes-upgrade-916231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 19:31:54.655975  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:31:54.658809  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.659246  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:31:54.659293  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:31:54.659488  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:31:54.659698  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:54.659898  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:31:54.660091  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:31:54.660300  446963 main.go:141] libmachine: Using SSH client type: native
	I0731 19:31:54.660538  446963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:31:54.660567  446963 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:32:03.257316  446963 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:32:03.257348  446963 machine.go:97] duration metric: took 9.38644059s to provisionDockerMachine
	I0731 19:32:03.257363  446963 start.go:293] postStartSetup for "kubernetes-upgrade-916231" (driver="kvm2")
	I0731 19:32:03.257380  446963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:32:03.257402  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:32:03.257855  446963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:32:03.257902  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:32:03.261051  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.261437  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:32:03.261465  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.261669  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:32:03.261884  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:32:03.262066  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:32:03.262183  446963 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:32:03.358248  446963 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:32:03.364102  446963 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:32:03.364132  446963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:32:03.364196  446963 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:32:03.364299  446963 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:32:03.364447  446963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:32:03.380901  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:32:03.412392  446963 start.go:296] duration metric: took 154.997358ms for postStartSetup
	I0731 19:32:03.412434  446963 fix.go:56] duration metric: took 9.566863242s for fixHost
	I0731 19:32:03.412458  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:32:03.414943  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.415221  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:32:03.415254  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.415573  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:32:03.415772  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:32:03.415969  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:32:03.416121  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:32:03.416237  446963 main.go:141] libmachine: Using SSH client type: native
	I0731 19:32:03.416445  446963 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.208 22 <nil> <nil>}
	I0731 19:32:03.416460  446963 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 19:32:03.538634  446963 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454323.528541856
	
	I0731 19:32:03.538666  446963 fix.go:216] guest clock: 1722454323.528541856
	I0731 19:32:03.538676  446963 fix.go:229] Guest: 2024-07-31 19:32:03.528541856 +0000 UTC Remote: 2024-07-31 19:32:03.412439796 +0000 UTC m=+17.125136022 (delta=116.10206ms)
	I0731 19:32:03.538730  446963 fix.go:200] guest clock delta is within tolerance: 116.10206ms
	I0731 19:32:03.538739  446963 start.go:83] releasing machines lock for "kubernetes-upgrade-916231", held for 9.69319912s
	I0731 19:32:03.538776  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:32:03.539149  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:32:03.542506  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.542944  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:32:03.542970  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.543261  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:32:03.543924  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:32:03.544173  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .DriverName
	I0731 19:32:03.544286  446963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:32:03.544338  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:32:03.544438  446963 ssh_runner.go:195] Run: cat /version.json
	I0731 19:32:03.544465  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHHostname
	I0731 19:32:03.547478  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.547823  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.547862  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:32:03.547879  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.548125  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:32:03.548328  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:32:03.548414  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:32:03.548436  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:32:03.548556  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:32:03.548770  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHPort
	I0731 19:32:03.548794  446963 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:32:03.548987  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHKeyPath
	I0731 19:32:03.549148  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetSSHUsername
	I0731 19:32:03.549311  446963 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/kubernetes-upgrade-916231/id_rsa Username:docker}
	I0731 19:32:03.688175  446963 ssh_runner.go:195] Run: systemctl --version
	I0731 19:32:03.717247  446963 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:32:04.304079  446963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:32:04.389223  446963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:32:04.389310  446963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:32:04.496352  446963 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:32:04.496404  446963 start.go:495] detecting cgroup driver to use...
	I0731 19:32:04.496484  446963 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:32:04.662576  446963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:32:04.725545  446963 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:32:04.725595  446963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:32:04.891213  446963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:32:04.942457  446963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:32:05.345807  446963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:32:05.627989  446963 docker.go:233] disabling docker service ...
	I0731 19:32:05.628075  446963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:32:05.653444  446963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:32:05.676652  446963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:32:05.912603  446963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:32:06.156737  446963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:32:06.174549  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:32:06.199718  446963 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0731 19:32:06.199820  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.214743  446963 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:32:06.214833  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.232073  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.247674  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.263718  446963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:32:06.279490  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.295122  446963 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.310969  446963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:32:06.326409  446963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:32:06.340688  446963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:32:06.359649  446963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:32:06.592084  446963 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:33:37.255375  446963 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.663242268s)
	I0731 19:33:37.255415  446963 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:33:37.255494  446963 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:33:37.262802  446963 start.go:563] Will wait 60s for crictl version
	I0731 19:33:37.262874  446963 ssh_runner.go:195] Run: which crictl
	I0731 19:33:37.268920  446963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:33:37.328495  446963 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:33:37.328600  446963 ssh_runner.go:195] Run: crio --version
	I0731 19:33:37.377264  446963 ssh_runner.go:195] Run: crio --version
	I0731 19:33:37.436874  446963 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0731 19:33:37.438566  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) Calling .GetIP
	I0731 19:33:37.442505  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:33:37.442978  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:c3:a5", ip: ""} in network mk-kubernetes-upgrade-916231: {Iface:virbr2 ExpiryTime:2024-07-31 20:31:22 +0000 UTC Type:0 Mac:52:54:00:1a:c3:a5 Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:kubernetes-upgrade-916231 Clientid:01:52:54:00:1a:c3:a5}
	I0731 19:33:37.443015  446963 main.go:141] libmachine: (kubernetes-upgrade-916231) DBG | domain kubernetes-upgrade-916231 has defined IP address 192.168.50.208 and MAC address 52:54:00:1a:c3:a5 in network mk-kubernetes-upgrade-916231
	I0731 19:33:37.443513  446963 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0731 19:33:37.449730  446963 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:33:37.449879  446963 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 19:33:37.449941  446963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:33:37.513789  446963 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:33:37.513821  446963 crio.go:433] Images already preloaded, skipping extraction
	I0731 19:33:37.513890  446963 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:33:37.564671  446963 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:33:37.564706  446963 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:33:37.564717  446963 kubeadm.go:934] updating node { 192.168.50.208 8443 v1.31.0-beta.0 crio true true} ...
	I0731 19:33:37.564886  446963 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-916231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:33:37.564984  446963 ssh_runner.go:195] Run: crio config
	I0731 19:33:37.638615  446963 cni.go:84] Creating CNI manager for ""
	I0731 19:33:37.638645  446963 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:33:37.638662  446963 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:33:37.638693  446963 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.208 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-916231 NodeName:kubernetes-upgrade-916231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:33:37.638882  446963 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-916231"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:33:37.638958  446963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0731 19:33:37.663669  446963 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:33:37.663747  446963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:33:37.677890  446963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0731 19:33:37.703462  446963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0731 19:33:37.729970  446963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0731 19:33:37.756704  446963 ssh_runner.go:195] Run: grep 192.168.50.208	control-plane.minikube.internal$ /etc/hosts
	I0731 19:33:37.762620  446963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:33:38.005230  446963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:33:38.028657  446963 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231 for IP: 192.168.50.208
	I0731 19:33:38.028735  446963 certs.go:194] generating shared ca certs ...
	I0731 19:33:38.028763  446963 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:33:38.028971  446963 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:33:38.029067  446963 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:33:38.029107  446963 certs.go:256] generating profile certs ...
	I0731 19:33:38.029287  446963 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/client.key
	I0731 19:33:38.029421  446963 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key.7a7f958c
	I0731 19:33:38.029497  446963 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key
	I0731 19:33:38.029658  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:33:38.029696  446963 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:33:38.029731  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:33:38.029781  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:33:38.029852  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:33:38.029914  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:33:38.030028  446963 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:33:38.031007  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:33:38.068197  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:33:38.139097  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:33:38.268522  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:33:38.299245  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 19:33:38.393194  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:33:38.797148  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:33:38.956658  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kubernetes-upgrade-916231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 19:33:39.014086  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:33:39.081264  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:33:39.132432  446963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:33:39.238613  446963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:33:39.305078  446963 ssh_runner.go:195] Run: openssl version
	I0731 19:33:39.314060  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:33:39.356872  446963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:33:39.385118  446963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:33:39.385231  446963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:33:39.418082  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:33:39.442090  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:33:39.460178  446963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:33:39.467414  446963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:33:39.467489  446963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:33:39.476874  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:33:39.491454  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:33:39.510364  446963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:33:39.518635  446963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:33:39.518715  446963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:33:39.527188  446963 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 19:33:39.541674  446963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:33:39.548534  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:33:39.557856  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:33:39.566819  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:33:39.576476  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:33:39.586141  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:33:39.594572  446963 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 19:33:39.602795  446963 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-916231 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-916231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:33:39.602914  446963 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:33:39.602986  446963 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:33:39.659884  446963 cri.go:89] found id: "c761226fab22f0e08fddb1244df3e611f8a5c1e3827768f356089433a39607a8"
	I0731 19:33:39.659914  446963 cri.go:89] found id: "fe0810149a5b791df7b9a53345331ea1633f76df66c805c47c57498132df8dbf"
	I0731 19:33:39.659921  446963 cri.go:89] found id: "3d504f675a2570b5b47b4fc5b3cc702cf7c3d2a5be556a8def0fa8e2b9a74a34"
	I0731 19:33:39.659928  446963 cri.go:89] found id: "21d6ebcfda7970b24a9d1e5d8625012db3a36ad07a9e4a019310da5907a1798c"
	I0731 19:33:39.659933  446963 cri.go:89] found id: "07ff36bd36c4d024663c5de4d38f184fccac00e435f3b7480915f45a30fbff14"
	I0731 19:33:39.659937  446963 cri.go:89] found id: "12ec1c43a72c0822ae2e2e9ad541f7570d815484a945a803fe3733302c8aea7f"
	I0731 19:33:39.659941  446963 cri.go:89] found id: "2b2e59eec9ebcc09fc55acbe93096b4bb9efd6a4f95d23ad74206a17fbac8afd"
	I0731 19:33:39.659945  446963 cri.go:89] found id: "880afd9ae98e1edbe9ecb7a9cf890d660a59b32bf73f5127366dc0d289ee0749"
	I0731 19:33:39.659949  446963 cri.go:89] found id: "dd1d64a18593ad9d0a64715a1af547a06558f26815a5596dd64d6983894cdeef"
	I0731 19:33:39.659957  446963 cri.go:89] found id: "59cff1846da6dd1396bb738a24f7f9e7c7a522b668cb4fda7078ce2f83152cee"
	I0731 19:33:39.659961  446963 cri.go:89] found id: "ab07df3064e7c82a92d1c222bb0543334ec8ee3265b4142546cd465c3554ed13"
	I0731 19:33:39.659966  446963 cri.go:89] found id: "4edb08ec468dc7d43aac501df69cde4a9f2cedf7345149e28053767895f29c4e"
	I0731 19:33:39.659970  446963 cri.go:89] found id: ""
	I0731 19:33:39.660022  446963 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-31 19:45:52.84471981 +0000 UTC m=+5459.664232791
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-916231 -n kubernetes-upgrade-916231
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-916231 -n kubernetes-upgrade-916231: exit status 2 (238.205041ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-916231 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-170831 sudo cat                              | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo cat                              | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo cat                              | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo                                  | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo find                             | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-170831 sudo crio                             | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-170831                                       | bridge-170831          | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:36 UTC |
	| start   | -p embed-certs-096465                                  | embed-certs-096465     | jenkins | v1.33.1 | 31 Jul 24 19:36 UTC | 31 Jul 24 19:38 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-096465            | embed-certs-096465     | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC | 31 Jul 24 19:38 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-096465                                  | embed-certs-096465     | jenkins | v1.33.1 | 31 Jul 24 19:38 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-417122             | no-preload-417122      | jenkins | v1.33.1 | 31 Jul 24 19:39 UTC | 31 Jul 24 19:39 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-417122                                   | no-preload-417122      | jenkins | v1.33.1 | 31 Jul 24 19:39 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-553149        | old-k8s-version-553149 | jenkins | v1.33.1 | 31 Jul 24 19:40 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-096465                 | embed-certs-096465     | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-096465                                  | embed-certs-096465     | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-417122                  | no-preload-417122      | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-417122 --memory=2200                     | no-preload-417122      | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-553149                              | old-k8s-version-553149 | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC | 31 Jul 24 19:41 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-553149             | old-k8s-version-553149 | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC | 31 Jul 24 19:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-553149                              | old-k8s-version-553149 | jenkins | v1.33.1 | 31 Jul 24 19:41 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
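
	For reference, the last "start" entry recorded in the table above corresponds roughly to the invocation below. This is only a reconstruction of the wrapped table row (binary name taken from the MINIKUBE_BIN value in the start log that follows), not additional captured output:

	out/minikube-linux-amd64 start -p old-k8s-version-553149 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0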
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:41:56
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:41:56.933185  461577 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:41:56.933416  461577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:41:56.933425  461577 out.go:304] Setting ErrFile to fd 2...
	I0731 19:41:56.933429  461577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:41:56.933634  461577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:41:56.934178  461577 out.go:298] Setting JSON to false
	I0731 19:41:56.935182  461577 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":12260,"bootTime":1722442657,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:41:56.935243  461577 start.go:139] virtualization: kvm guest
	I0731 19:41:56.937265  461577 out.go:177] * [old-k8s-version-553149] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:41:56.938500  461577 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:41:56.938546  461577 notify.go:220] Checking for updates...
	I0731 19:41:56.940971  461577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:41:56.942246  461577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:41:56.943688  461577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:41:56.944923  461577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:41:56.946031  461577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:41:56.947687  461577 config.go:182] Loaded profile config "old-k8s-version-553149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 19:41:56.948148  461577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:41:56.948203  461577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:41:56.963178  461577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35267
	I0731 19:41:56.963573  461577 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:41:56.964046  461577 main.go:141] libmachine: Using API Version  1
	I0731 19:41:56.964067  461577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:41:56.964427  461577 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:41:56.964619  461577 main.go:141] libmachine: (old-k8s-version-553149) Calling .DriverName
	I0731 19:41:56.966437  461577 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 19:41:56.967729  461577 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:41:56.968034  461577 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:41:56.968102  461577 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:41:56.982751  461577 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35225
	I0731 19:41:56.983207  461577 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:41:56.983657  461577 main.go:141] libmachine: Using API Version  1
	I0731 19:41:56.983677  461577 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:41:56.983979  461577 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:41:56.984138  461577 main.go:141] libmachine: (old-k8s-version-553149) Calling .DriverName
	I0731 19:41:57.019457  461577 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:41:57.020720  461577 start.go:297] selected driver: kvm2
	I0731 19:41:57.020737  461577 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-553149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-553149 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:41:57.020898  461577 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:41:57.021867  461577 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:41:57.021943  461577 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:41:57.037311  461577 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:41:57.037712  461577 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:41:57.037794  461577 cni.go:84] Creating CNI manager for ""
	I0731 19:41:57.037808  461577 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:41:57.037872  461577 start.go:340] cluster config:
	{Name:old-k8s-version-553149 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-553149 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.12 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:41:57.037993  461577 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:41:57.040224  461577 out.go:177] * Starting "old-k8s-version-553149" primary control-plane node in "old-k8s-version-553149" cluster
	I0731 19:41:54.180651  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:41:57.041422  461577 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 19:41:57.041459  461577 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 19:41:57.041472  461577 cache.go:56] Caching tarball of preloaded images
	I0731 19:41:57.041555  461577 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:41:57.041566  461577 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 19:41:57.041670  461577 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/old-k8s-version-553149/config.json ...
	I0731 19:41:57.041888  461577 start.go:360] acquireMachinesLock for old-k8s-version-553149: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:42:00.260693  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:03.332733  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:09.412659  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:12.484715  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:18.564693  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:21.636688  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:27.716621  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:30.788656  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:36.868756  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:39.940709  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:46.020614  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:49.092786  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:55.172663  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:42:58.244727  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:04.324672  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:07.396722  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:13.476670  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:16.548693  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:22.628698  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:25.700686  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:31.780730  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:34.852702  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:40.932660  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:44.004705  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:50.084691  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:53.156699  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:43:59.236696  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:02.308666  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:08.388685  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:11.460759  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:17.540673  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:20.612670  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:26.692624  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:29.764712  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:35.844688  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:38.920718  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:44.996665  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:48.068729  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:54.148632  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:44:57.220669  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:03.300697  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:06.372717  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:12.452684  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:15.524654  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:21.604658  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:24.676668  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:30.756650  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:33.828705  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:39.908689  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:42.980676  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:49.060690  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:52.132719  461134 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.37:22: connect: no route to host
	I0731 19:45:51.946200  446963 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000317142s
	I0731 19:45:51.948076  446963 kubeadm.go:310] 
	I0731 19:45:51.948131  446963 kubeadm.go:310] Unfortunately, an error has occurred:
	I0731 19:45:51.948158  446963 kubeadm.go:310] 	context deadline exceeded
	I0731 19:45:51.948166  446963 kubeadm.go:310] 
	I0731 19:45:51.948197  446963 kubeadm.go:310] This error is likely caused by:
	I0731 19:45:51.948279  446963 kubeadm.go:310] 	- The kubelet is not running
	I0731 19:45:51.948451  446963 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 19:45:51.948464  446963 kubeadm.go:310] 
	I0731 19:45:51.948553  446963 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 19:45:51.948586  446963 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0731 19:45:51.948620  446963 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0731 19:45:51.948624  446963 kubeadm.go:310] 
	I0731 19:45:51.948709  446963 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 19:45:51.948777  446963 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 19:45:51.948868  446963 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0731 19:45:51.949002  446963 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 19:45:51.949100  446963 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0731 19:45:51.949203  446963 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0731 19:45:51.950331  446963 kubeadm.go:310] W0731 19:41:49.750511   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 19:45:51.950589  446963 kubeadm.go:310] W0731 19:41:49.751233   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0731 19:45:51.950693  446963 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:45:51.950840  446963 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0731 19:45:51.950932  446963 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 19:45:51.951043  446963 kubeadm.go:394] duration metric: took 12m12.348266566s to StartCluster
	I0731 19:45:51.951171  446963 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 19:45:51.951277  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 19:45:51.995555  446963 cri.go:89] found id: ""
	I0731 19:45:51.995597  446963 logs.go:276] 0 containers: []
	W0731 19:45:51.995608  446963 logs.go:278] No container was found matching "kube-apiserver"
	I0731 19:45:51.995616  446963 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 19:45:51.995688  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 19:45:52.030513  446963 cri.go:89] found id: "39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee"
	I0731 19:45:52.030544  446963 cri.go:89] found id: ""
	I0731 19:45:52.030554  446963 logs.go:276] 1 containers: [39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee]
	I0731 19:45:52.030625  446963 ssh_runner.go:195] Run: which crictl
	I0731 19:45:52.035409  446963 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 19:45:52.035479  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 19:45:52.071705  446963 cri.go:89] found id: ""
	I0731 19:45:52.071736  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.071746  446963 logs.go:278] No container was found matching "coredns"
	I0731 19:45:52.071775  446963 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 19:45:52.071831  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 19:45:52.105267  446963 cri.go:89] found id: ""
	I0731 19:45:52.105299  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.105308  446963 logs.go:278] No container was found matching "kube-scheduler"
	I0731 19:45:52.105315  446963 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 19:45:52.105381  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 19:45:52.140612  446963 cri.go:89] found id: ""
	I0731 19:45:52.140642  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.140655  446963 logs.go:278] No container was found matching "kube-proxy"
	I0731 19:45:52.140665  446963 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 19:45:52.140722  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 19:45:52.176608  446963 cri.go:89] found id: ""
	I0731 19:45:52.176646  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.176659  446963 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 19:45:52.176667  446963 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 19:45:52.176725  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 19:45:52.211587  446963 cri.go:89] found id: ""
	I0731 19:45:52.211618  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.211629  446963 logs.go:278] No container was found matching "kindnet"
	I0731 19:45:52.211637  446963 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0731 19:45:52.211704  446963 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0731 19:45:52.247349  446963 cri.go:89] found id: ""
	I0731 19:45:52.247381  446963 logs.go:276] 0 containers: []
	W0731 19:45:52.247393  446963 logs.go:278] No container was found matching "storage-provisioner"
	I0731 19:45:52.247418  446963 logs.go:123] Gathering logs for describe nodes ...
	I0731 19:45:52.247436  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 19:45:52.324701  446963 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 19:45:52.324735  446963 logs.go:123] Gathering logs for etcd [39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee] ...
	I0731 19:45:52.324757  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee"
	I0731 19:45:52.366209  446963 logs.go:123] Gathering logs for CRI-O ...
	I0731 19:45:52.366244  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 19:45:52.551574  446963 logs.go:123] Gathering logs for container status ...
	I0731 19:45:52.551620  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 19:45:52.605362  446963 logs.go:123] Gathering logs for kubelet ...
	I0731 19:45:52.605393  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 19:45:52.801039  446963 logs.go:123] Gathering logs for dmesg ...
	I0731 19:45:52.801087  446963 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0731 19:45:52.816649  446963 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.145265ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000317142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0731 19:41:49.750511   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0731 19:41:49.751233   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 19:45:52.816703  446963 out.go:239] * 
	W0731 19:45:52.816866  446963 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.145265ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000317142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0731 19:41:49.750511   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0731 19:41:49.751233   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:45:52.816895  446963 out.go:239] * 
	W0731 19:45:52.817804  446963 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 19:45:52.821417  446963 out.go:177] 
	W0731 19:45:52.823113  446963 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.31.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.145265ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000317142s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
	W0731 19:41:49.750511   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	W0731 19:41:49.751233   10374 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:45:52.823185  446963 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 19:45:52.823222  446963 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 19:45:52.825020  446963 out.go:177] 
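
	The K8S_KUBELET_NOT_RUNNING exit above can be investigated with the commands the kubeadm output itself suggests. A minimal sketch, run inside the affected guest (for example via out/minikube-linux-amd64 -p kubernetes-upgrade-916231 ssh; the profile name is inferred from the CRI-O section below), using the CRI-O socket path quoted in the log:

	# kubelet health and recent journal entries (from the kubeadm hints above)
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# list control-plane containers known to CRI-O, then inspect a failing one
	# (CONTAINERID is the placeholder used by the kubeadm message)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# retry suggested by the log, with an explicit kubelet cgroup driver
	out/minikube-linux-amd64 start -p kubernetes-upgrade-916231 --extra-config=kubelet.cgroup-driver=systemd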
	
	
	==> CRI-O <==
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.451206169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53fd253b-4ea2-4d5c-b4f1-50ad7f68938f name=/runtime.v1.RuntimeService/Version
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.452577965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fec49645-26b8-469d-813c-51a1b80581d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.453005725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455153452982737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fec49645-26b8-469d-813c-51a1b80581d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.453500753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c51d10c2-ae77-4be9-8bdf-709ea2a63986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.453587242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c51d10c2-ae77-4be9-8bdf-709ea2a63986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.453650033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee,PodSandboxId:52772a1b6b98bb2c394845aa34c68e975de26566b32077c0a962fdb2fd1f9993,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722454912443468258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5af1ccec48e1cb8ac1aa7c6116569f,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c51d10c2-ae77-4be9-8bdf-709ea2a63986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.486437662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=209af2ce-6a49-4a96-8ac3-11dd9c8516d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.486525041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=209af2ce-6a49-4a96-8ac3-11dd9c8516d3 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.487724201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=161cc079-f7c5-47d1-970b-d993bffac7ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.488186405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455153488163638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=161cc079-f7c5-47d1-970b-d993bffac7ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.488712614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c431bc0a-217b-4f91-a8af-c93fd926321f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.488761012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c431bc0a-217b-4f91-a8af-c93fd926321f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.488819423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee,PodSandboxId:52772a1b6b98bb2c394845aa34c68e975de26566b32077c0a962fdb2fd1f9993,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722454912443468258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5af1ccec48e1cb8ac1aa7c6116569f,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c431bc0a-217b-4f91-a8af-c93fd926321f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.492525786Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=76432b24-19d4-4e51-9201-66768a27bf98 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.492683200Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d40ffb905d0c68a03b659015de13585ad6a2db3ed0eeee129b09df8374b0d467,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-916231,Uid:41e3eded19f8063b8096e97546da1a24,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722454912229105200,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41e3eded19f8063b8096e97546da1a24,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.208:8443,kubernetes.io/config.hash: 41e3eded19f8063b8096e97546da1a24,kubernetes.io/config.seen: 2024-07-31T19:41:51.778517336Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:52772
a1b6b98bb2c394845aa34c68e975de26566b32077c0a962fdb2fd1f9993,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-916231,Uid:0d5af1ccec48e1cb8ac1aa7c6116569f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722454912225198037,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5af1ccec48e1cb8ac1aa7c6116569f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.208:2379,kubernetes.io/config.hash: 0d5af1ccec48e1cb8ac1aa7c6116569f,kubernetes.io/config.seen: 2024-07-31T19:41:51.778515979Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:06ac0903055330e3f6b39738d97a3f1e5f30784e0e79b7de8e4a7823eba00e11,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-916231,Uid:ce78262c17f60b2fcb2a992a4d58c18c,Namespace:kube-system,Attempt:0,},State:SANDBOX_
READY,CreatedAt:1722454912214196231,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce78262c17f60b2fcb2a992a4d58c18c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ce78262c17f60b2fcb2a992a4d58c18c,kubernetes.io/config.seen: 2024-07-31T19:41:51.778518764Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d9c98629cfa72a800021ec6f12d18a37158417ad0f7170c742c3813dd702036,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-916231,Uid:a6e0a4e6b8e0fe4db13f1604334a51ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722454912209692011,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e0a4e6b
8e0fe4db13f1604334a51ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a6e0a4e6b8e0fe4db13f1604334a51ad,kubernetes.io/config.seen: 2024-07-31T19:41:51.778512098Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=76432b24-19d4-4e51-9201-66768a27bf98 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.493183182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=556c1005-9a11-4361-acb5-60bcaa81b1db name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.493256457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=556c1005-9a11-4361-acb5-60bcaa81b1db name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.493330305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee,PodSandboxId:52772a1b6b98bb2c394845aa34c68e975de26566b32077c0a962fdb2fd1f9993,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722454912443468258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5af1ccec48e1cb8ac1aa7c6116569f,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=556c1005-9a11-4361-acb5-60bcaa81b1db name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.523336023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30b16d88-4be2-4a68-98a0-d0cb92db3680 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.523441732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30b16d88-4be2-4a68-98a0-d0cb92db3680 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.524543177Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01e1e350-b541-41e4-88e8-2278b7302429 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.525119726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722455153525094314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01e1e350-b541-41e4-88e8-2278b7302429 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.525657067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ff50e84-ecc5-43e4-a821-4d27213803f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.525725592Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ff50e84-ecc5-43e4-a821-4d27213803f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:45:53 kubernetes-upgrade-916231 crio[3156]: time="2024-07-31 19:45:53.525785013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee,PodSandboxId:52772a1b6b98bb2c394845aa34c68e975de26566b32077c0a962fdb2fd1f9993,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722454912443468258,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-916231,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5af1ccec48e1cb8ac1aa7c6116569f,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ff50e84-ecc5-43e4-a821-4d27213803f4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	39747e8a9375a       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   4 minutes ago       Running             etcd                4                   52772a1b6b98b       etcd-kubernetes-upgrade-916231
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +5.927981] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.070026] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.074415] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.216247] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.136271] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.339587] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +4.345987] systemd-fstab-generator[731]: Ignoring "noauto" option for root device
	[  +0.077779] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.452773] systemd-fstab-generator[852]: Ignoring "noauto" option for root device
	[  +7.992106] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.146490] kauditd_printk_skb: 97 callbacks suppressed
	[  +4.999497] kauditd_printk_skb: 50 callbacks suppressed
	[Jul31 19:32] kauditd_printk_skb: 49 callbacks suppressed
	[  +1.529631] systemd-fstab-generator[2811]: Ignoring "noauto" option for root device
	[  +0.318448] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +0.262961] systemd-fstab-generator[2964]: Ignoring "noauto" option for root device
	[  +0.272089] systemd-fstab-generator[2982]: Ignoring "noauto" option for root device
	[  +0.431256] systemd-fstab-generator[3010]: Ignoring "noauto" option for root device
	[Jul31 19:33] systemd-fstab-generator[3300]: Ignoring "noauto" option for root device
	[  +0.156207] kauditd_printk_skb: 210 callbacks suppressed
	[  +3.434800] systemd-fstab-generator[3857]: Ignoring "noauto" option for root device
	[Jul31 19:37] kauditd_printk_skb: 117 callbacks suppressed
	[  +2.577431] systemd-fstab-generator[10029]: Ignoring "noauto" option for root device
	[Jul31 19:41] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.873446] systemd-fstab-generator[10400]: Ignoring "noauto" option for root device
	
	
	==> etcd [39747e8a9375a3cf366554fd10b95ab6b0b7103fd13d3f441bb80b1857c038ee] <==
	{"level":"info","ts":"2024-07-31T19:41:52.584763Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T19:41:52.584814Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.208:2380"}
	{"level":"info","ts":"2024-07-31T19:41:52.584842Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.208:2380"}
	{"level":"info","ts":"2024-07-31T19:41:52.586297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 switched to configuration voters=(3568824936382685312)"}
	{"level":"info","ts":"2024-07-31T19:41:52.586583Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","added-peer-id":"3187035f06a56080","added-peer-peer-urls":["https://192.168.50.208:2380"]}
	{"level":"info","ts":"2024-07-31T19:41:53.466256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-31T19:41:53.466296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-31T19:41:53.466312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 received MsgPreVoteResp from 3187035f06a56080 at term 1"}
	{"level":"info","ts":"2024-07-31T19:41:53.466323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:41:53.466329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 received MsgVoteResp from 3187035f06a56080 at term 2"}
	{"level":"info","ts":"2024-07-31T19:41:53.466337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3187035f06a56080 became leader at term 2"}
	{"level":"info","ts":"2024-07-31T19:41:53.466344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3187035f06a56080 elected leader 3187035f06a56080 at term 2"}
	{"level":"info","ts":"2024-07-31T19:41:53.469163Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3187035f06a56080","local-member-attributes":"{Name:kubernetes-upgrade-916231 ClientURLs:[https://192.168.50.208:2379]}","request-path":"/0/members/3187035f06a56080/attributes","cluster-id":"9c0c31ebbc007527","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:41:53.46931Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:41:53.469451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:41:53.469835Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:41:53.469955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:41:53.46997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:41:53.470528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T19:41:53.471287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T19:41:53.471327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:41:53.472009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.208:2379"}
	{"level":"info","ts":"2024-07-31T19:41:53.472076Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9c0c31ebbc007527","local-member-id":"3187035f06a56080","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:41:53.472142Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T19:41:53.472165Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:45:53 up 14 min,  0 users,  load average: 0.01, 0.13, 0.16
	Linux kubernetes-upgrade-916231 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 31 19:45:41 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:41.453820   10407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.208:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-916231.17e7639dc8cd8353  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-916231,UID:kubernetes-upgrade-916231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-916231 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-916231,},FirstTimestamp:2024-07-31 19:41:51.822431059 +0000 UTC m=+0.416412673,LastTimestamp:2024-07-31 19:41:51.822431059 +0000 UTC m=+0.416412673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,Rep
ortingController:kubelet,ReportingInstance:kubernetes-upgrade-916231,}"
	Jul 31 19:45:41 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:41.456664   10407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-916231?timeout=10s\": dial tcp 192.168.50.208:8443: connect: connection refused" interval="7s"
	Jul 31 19:45:41 kubernetes-upgrade-916231 kubelet[10407]: I0731 19:45:41.614349   10407 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-916231"
	Jul 31 19:45:41 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:41.616360   10407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.208:8443: connect: connection refused" node="kubernetes-upgrade-916231"
	Jul 31 19:45:41 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:41.860523   10407 eviction_manager.go:283] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-916231\" not found"
	Jul 31 19:45:46 kubernetes-upgrade-916231 kubelet[10407]: W0731 19:45:46.662952   10407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.208:8443: connect: connection refused
	Jul 31 19:45:46 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:46.663045   10407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.208:8443: connect: connection refused" logger="UnhandledError"
	Jul 31 19:45:47 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:47.804052   10407 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-916231_kube-system_ce78262c17f60b2fcb2a992a4d58c18c_1\" is already in use by bb96bd0b7893ad81c01b73366ac8cbdcefbb73c00ca3f7b62046b36e84e94a42. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="06ac0903055330e3f6b39738d97a3f1e5f30784e0e79b7de8e4a7823eba00e11"
	Jul 31 19:45:47 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:47.804447   10407 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.31.0-beta.0,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=
10.96.0.0/12 --use-service-account-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-ma
nager-kubernetes-upgrade-916231_kube-system(ce78262c17f60b2fcb2a992a4d58c18c): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-916231_kube-system_ce78262c17f60b2fcb2a992a4d58c18c_1\" is already in use by bb96bd0b7893ad81c01b73366ac8cbdcefbb73c00ca3f7b62046b36e84e94a42. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 31 19:45:47 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:47.805783   10407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-916231_kube-system_ce78262c17f60b2fcb2a992a4d58c18c_1\\\" is already in use by bb96bd0b7893ad81c01b73366ac8cbdcefbb73c00ca3f7b62046b36e84e94a42. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-916231" podUID="ce78262c17f60b2fcb2a992a4d58c18c"
	Jul 31 19:45:48 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:48.458231   10407 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-916231?timeout=10s\": dial tcp 192.168.50.208:8443: connect: connection refused" interval="7s"
	Jul 31 19:45:48 kubernetes-upgrade-916231 kubelet[10407]: I0731 19:45:48.618727   10407 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-916231"
	Jul 31 19:45:48 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:48.619822   10407 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.208:8443: connect: connection refused" node="kubernetes-upgrade-916231"
	Jul 31 19:45:48 kubernetes-upgrade-916231 kubelet[10407]: W0731 19:45:48.806634   10407 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.208:8443: connect: connection refused
	Jul 31 19:45:48 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:48.806762   10407 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.50.208:8443: connect: connection refused" logger="UnhandledError"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.456761   10407 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.208:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-916231.17e7639dc8cd8353  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-916231,UID:kubernetes-upgrade-916231,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node kubernetes-upgrade-916231 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-916231,},FirstTimestamp:2024-07-31 19:41:51.822431059 +0000 UTC m=+0.416412673,LastTimestamp:2024-07-31 19:41:51.822431059 +0000 UTC m=+0.416412673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,Rep
ortingController:kubelet,ReportingInstance:kubernetes-upgrade-916231,}"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.801121   10407 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-916231_kube-system_41e3eded19f8063b8096e97546da1a24_1\" is already in use by bfdef29abc3fd44010affe9204319f3a0668ee07cf696641ff7920f427d59c96. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d40ffb905d0c68a03b659015de13585ad6a2db3ed0eeee129b09df8374b0d467"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.801254   10407 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-apiserver,Image:registry.k8s.io/kube-apiserver:v1.31.0-beta.0,Command:[kube-apiserver --advertise-address=192.168.50.208 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/var/lib/minikube/certs/ca.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --enable-bootstrap-token-auth=true --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key --kubelet-
preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=8443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/minikube/certs/sa.pub --service-account-signing-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/var/lib/minikube/certs/apiserver.crt --tls-private-key-file=/var/lib/minikube/certs/apiserver.key],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{250 -3} {<nil>} 250m DecimalSI},},Claims:[]Resou
rceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.50.208,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8443 },Host:192.168.50.208,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,Timeo
utSeconds:15,PeriodSeconds:1,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8443 },Host:192.168.50.208,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-apiserver-kubernetes-upgrade-916231_kube-system(41e3eded19f8063b8096e97546da1a24): CreateContainerError: the container name \"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-916231_kube-system_41e3eded19f8063b8096e97546da1a24_1\" is already in use by bfdef29abc3fd44010affe92
04319f3a0668ee07cf696641ff7920f427d59c96. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.803020   10407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"the container name \\\"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-916231_kube-system_41e3eded19f8063b8096e97546da1a24_1\\\" is already in use by bfdef29abc3fd44010affe9204319f3a0668ee07cf696641ff7920f427d59c96. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-916231" podUID="41e3eded19f8063b8096e97546da1a24"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.824440   10407 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 19:45:51 kubernetes-upgrade-916231 kubelet[10407]: E0731 19:45:51.861125   10407 eviction_manager.go:283] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"kubernetes-upgrade-916231\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-916231 -n kubernetes-upgrade-916231
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-916231 -n kubernetes-upgrade-916231: exit status 2 (229.748094ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-916231" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-916231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-916231
--- FAIL: TestKubernetesUpgrade (1221.12s)
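The CreateContainerError entries in the kubelet log above show kube-apiserver and kube-controller-manager unable to restart because CRI-O still holds older containers registered under the same k8s_ names (e.g. the one starting bfdef29abc3f...), so the apiserver on 192.168.50.208:8443 never comes back and every subsequent call is refused. A minimal manual-inspection sketch for clearing such a name collision (assuming SSH access to the node via this profile; <container-id> is a placeholder for the ID reported in the log, not a value from this run):

	# open a shell on the node: out/minikube-linux-amd64 ssh -p kubernetes-upgrade-916231
	sudo crictl ps -a | grep kube-apiserver   # list all containers, including exited ones
	sudo crictl stop <container-id>           # stop the stale container if it is still running
	sudo crictl rm <container-id>             # remove it so the k8s_ name can be reused
	sudo systemctl restart kubelet            # let kubelet recreate the static-pod container

This is only a diagnostic workaround for reproducing the failure locally; it is not part of the test flow recorded above.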

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (67.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-693348 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-693348 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.884121241s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-693348] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-693348" primary control-plane node in "pause-693348" cluster
	* Updating the running kvm2 "pause-693348" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-693348" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:29:35.512846  444999 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:29:35.512956  444999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:35.512961  444999 out.go:304] Setting ErrFile to fd 2...
	I0731 19:29:35.512965  444999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:29:35.513157  444999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:29:35.513704  444999 out.go:298] Setting JSON to false
	I0731 19:29:35.514730  444999 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11518,"bootTime":1722442657,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:29:35.514793  444999 start.go:139] virtualization: kvm guest
	I0731 19:29:35.517380  444999 out.go:177] * [pause-693348] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:29:35.519102  444999 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:29:35.519159  444999 notify.go:220] Checking for updates...
	I0731 19:29:35.522151  444999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:29:35.523403  444999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:29:35.524660  444999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:29:35.525950  444999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:29:35.527388  444999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:29:35.529592  444999 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:35.530284  444999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:35.530345  444999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:35.546352  444999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39471
	I0731 19:29:35.546783  444999 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:35.547382  444999 main.go:141] libmachine: Using API Version  1
	I0731 19:29:35.547407  444999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:35.547735  444999 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:35.547976  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:35.548282  444999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:29:35.548734  444999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:35.548789  444999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:35.564128  444999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0731 19:29:35.564693  444999 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:35.565279  444999 main.go:141] libmachine: Using API Version  1
	I0731 19:29:35.565307  444999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:35.565662  444999 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:35.565907  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:35.604460  444999 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 19:29:35.605775  444999 start.go:297] selected driver: kvm2
	I0731 19:29:35.605790  444999 start.go:901] validating driver "kvm2" against &{Name:pause-693348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-693348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:35.605999  444999 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:29:35.606442  444999 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:35.606539  444999 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:29:35.623759  444999 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:29:35.624870  444999 cni.go:84] Creating CNI manager for ""
	I0731 19:29:35.624895  444999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:35.625009  444999 start.go:340] cluster config:
	{Name:pause-693348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-693348 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:35.625209  444999 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:29:35.627685  444999 out.go:177] * Starting "pause-693348" primary control-plane node in "pause-693348" cluster
	I0731 19:29:35.629088  444999 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:35.629137  444999 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:29:35.629146  444999 cache.go:56] Caching tarball of preloaded images
	I0731 19:29:35.629230  444999 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:29:35.629242  444999 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:29:35.629355  444999 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/config.json ...
	I0731 19:29:35.629567  444999 start.go:360] acquireMachinesLock for pause-693348: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:29:48.257730  444999 start.go:364] duration metric: took 12.628113743s to acquireMachinesLock for "pause-693348"
	I0731 19:29:48.257788  444999 start.go:96] Skipping create...Using existing machine configuration
	I0731 19:29:48.257798  444999 fix.go:54] fixHost starting: 
	I0731 19:29:48.258256  444999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:29:48.258329  444999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:29:48.275829  444999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0731 19:29:48.276348  444999 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:29:48.276867  444999 main.go:141] libmachine: Using API Version  1
	I0731 19:29:48.276897  444999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:29:48.277296  444999 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:29:48.277515  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:48.277719  444999 main.go:141] libmachine: (pause-693348) Calling .GetState
	I0731 19:29:48.279339  444999 fix.go:112] recreateIfNeeded on pause-693348: state=Running err=<nil>
	W0731 19:29:48.279362  444999 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 19:29:48.281201  444999 out.go:177] * Updating the running kvm2 "pause-693348" VM ...
	I0731 19:29:48.282453  444999 machine.go:94] provisionDockerMachine start ...
	I0731 19:29:48.282475  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:48.282683  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:48.285678  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.286193  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.286233  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.286412  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:48.286728  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.286865  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.287034  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:48.287241  444999 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:48.287451  444999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0731 19:29:48.287465  444999 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 19:29:48.405020  444999 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-693348
	
	I0731 19:29:48.405056  444999 main.go:141] libmachine: (pause-693348) Calling .GetMachineName
	I0731 19:29:48.405365  444999 buildroot.go:166] provisioning hostname "pause-693348"
	I0731 19:29:48.405416  444999 main.go:141] libmachine: (pause-693348) Calling .GetMachineName
	I0731 19:29:48.405609  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:48.408951  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.409508  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.409546  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.409821  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:48.410048  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.410217  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.410362  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:48.410615  444999 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:48.410843  444999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0731 19:29:48.410863  444999 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-693348 && echo "pause-693348" | sudo tee /etc/hostname
	I0731 19:29:48.543273  444999 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-693348
	
	I0731 19:29:48.543300  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:48.546364  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.546782  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.546806  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.547058  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:48.547328  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.547511  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.547619  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:48.547860  444999 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:48.548114  444999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0731 19:29:48.548141  444999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-693348' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-693348/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-693348' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:29:48.661663  444999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:29:48.661701  444999 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:29:48.661758  444999 buildroot.go:174] setting up certificates
	I0731 19:29:48.661775  444999 provision.go:84] configureAuth start
	I0731 19:29:48.661793  444999 main.go:141] libmachine: (pause-693348) Calling .GetMachineName
	I0731 19:29:48.662097  444999 main.go:141] libmachine: (pause-693348) Calling .GetIP
	I0731 19:29:48.665096  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.665425  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.665457  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.665564  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:48.667988  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.668367  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.668407  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.668597  444999 provision.go:143] copyHostCerts
	I0731 19:29:48.668664  444999 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:29:48.668689  444999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:29:48.668757  444999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:29:48.668941  444999 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:29:48.668957  444999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:29:48.668993  444999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:29:48.669074  444999 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:29:48.669084  444999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:29:48.669117  444999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:29:48.669181  444999 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.pause-693348 san=[127.0.0.1 192.168.39.242 localhost minikube pause-693348]
	I0731 19:29:48.814639  444999 provision.go:177] copyRemoteCerts
	I0731 19:29:48.814705  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:29:48.814732  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:48.817985  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.818423  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:48.818454  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:48.818669  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:48.818918  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:48.819108  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:48.819278  444999 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/pause-693348/id_rsa Username:docker}
	I0731 19:29:48.912091  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 19:29:48.941960  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 19:29:48.970132  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:29:49.005356  444999 provision.go:87] duration metric: took 343.563661ms to configureAuth
	I0731 19:29:49.005390  444999 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:29:49.005654  444999 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:29:49.005752  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:49.009324  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:49.009761  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:49.009791  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:49.010088  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:49.010342  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:49.010543  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:49.010725  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:49.010959  444999 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:49.011197  444999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0731 19:29:49.011226  444999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:29:54.583813  444999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:29:54.583846  444999 machine.go:97] duration metric: took 6.301375513s to provisionDockerMachine
	I0731 19:29:54.583860  444999 start.go:293] postStartSetup for "pause-693348" (driver="kvm2")
	I0731 19:29:54.583876  444999 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:29:54.583925  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:54.584294  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:29:54.584319  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:54.587574  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.588165  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:54.588198  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.588358  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:54.588582  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:54.588750  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:54.588894  444999 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/pause-693348/id_rsa Username:docker}
	I0731 19:29:54.676786  444999 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:29:54.681274  444999 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:29:54.681306  444999 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:29:54.681389  444999 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:29:54.681534  444999 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:29:54.681680  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:29:54.692207  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:29:54.716511  444999 start.go:296] duration metric: took 132.631274ms for postStartSetup
	I0731 19:29:54.716563  444999 fix.go:56] duration metric: took 6.458764378s for fixHost
	I0731 19:29:54.716591  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:54.719847  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.720162  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:54.720194  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.720437  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:54.720655  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:54.720781  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:54.720924  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:54.721126  444999 main.go:141] libmachine: Using SSH client type: native
	I0731 19:29:54.721360  444999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I0731 19:29:54.721377  444999 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 19:29:54.833482  444999 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454194.829428157
	
	I0731 19:29:54.833509  444999 fix.go:216] guest clock: 1722454194.829428157
	I0731 19:29:54.833531  444999 fix.go:229] Guest: 2024-07-31 19:29:54.829428157 +0000 UTC Remote: 2024-07-31 19:29:54.716568925 +0000 UTC m=+19.241641689 (delta=112.859232ms)
	I0731 19:29:54.833566  444999 fix.go:200] guest clock delta is within tolerance: 112.859232ms
	I0731 19:29:54.833574  444999 start.go:83] releasing machines lock for "pause-693348", held for 6.575812094s
	I0731 19:29:54.833601  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:54.833907  444999 main.go:141] libmachine: (pause-693348) Calling .GetIP
	I0731 19:29:54.836979  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.837435  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:54.837466  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.837679  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:54.838362  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:54.838568  444999 main.go:141] libmachine: (pause-693348) Calling .DriverName
	I0731 19:29:54.838720  444999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:29:54.838785  444999 ssh_runner.go:195] Run: cat /version.json
	I0731 19:29:54.838797  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:54.838808  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHHostname
	I0731 19:29:54.841921  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.841963  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.842340  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:54.842372  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.842417  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:54.842437  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:54.842528  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:54.842679  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHPort
	I0731 19:29:54.842736  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:54.842878  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHKeyPath
	I0731 19:29:54.842890  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:54.843079  444999 main.go:141] libmachine: (pause-693348) Calling .GetSSHUsername
	I0731 19:29:54.843169  444999 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/pause-693348/id_rsa Username:docker}
	I0731 19:29:54.843241  444999 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/pause-693348/id_rsa Username:docker}
	I0731 19:29:54.935222  444999 ssh_runner.go:195] Run: systemctl --version
	I0731 19:29:54.954660  444999 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:29:55.114458  444999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:29:55.122884  444999 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:29:55.122962  444999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:29:55.134553  444999 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 19:29:55.134581  444999 start.go:495] detecting cgroup driver to use...
	I0731 19:29:55.134663  444999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:29:55.155022  444999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:29:55.171670  444999 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:29:55.171746  444999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:29:55.187335  444999 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:29:55.202212  444999 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:29:55.346344  444999 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:29:55.494987  444999 docker.go:233] disabling docker service ...
	I0731 19:29:55.495062  444999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:29:55.517588  444999 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:29:55.534710  444999 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:29:55.687865  444999 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:29:55.845295  444999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:29:55.861473  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:29:55.884499  444999 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:29:55.884578  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.896159  444999 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:29:55.896256  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.907684  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.919684  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.930821  444999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:29:55.942254  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.953329  444999 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.965263  444999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:29:55.976480  444999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:29:55.986105  444999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:29:55.997433  444999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:56.130270  444999 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:29:57.824492  444999 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.694171941s)
	I0731 19:29:57.824538  444999 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:29:57.824601  444999 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:29:57.831674  444999 start.go:563] Will wait 60s for crictl version
	I0731 19:29:57.831748  444999 ssh_runner.go:195] Run: which crictl
	I0731 19:29:57.836952  444999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:29:57.875560  444999 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:29:57.875648  444999 ssh_runner.go:195] Run: crio --version
	I0731 19:29:57.905872  444999 ssh_runner.go:195] Run: crio --version
	I0731 19:29:57.939262  444999 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:29:57.940660  444999 main.go:141] libmachine: (pause-693348) Calling .GetIP
	I0731 19:29:57.943997  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:57.944465  444999 main.go:141] libmachine: (pause-693348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e7:01", ip: ""} in network mk-pause-693348: {Iface:virbr1 ExpiryTime:2024-07-31 20:28:41 +0000 UTC Type:0 Mac:52:54:00:00:e7:01 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:pause-693348 Clientid:01:52:54:00:00:e7:01}
	I0731 19:29:57.944496  444999 main.go:141] libmachine: (pause-693348) DBG | domain pause-693348 has defined IP address 192.168.39.242 and MAC address 52:54:00:00:e7:01 in network mk-pause-693348
	I0731 19:29:57.944739  444999 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 19:29:57.949517  444999 kubeadm.go:883] updating cluster {Name:pause-693348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-693348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:29:57.949695  444999 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:29:57.949753  444999 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:57.999701  444999 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:29:57.999728  444999 crio.go:433] Images already preloaded, skipping extraction
	I0731 19:29:57.999809  444999 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:29:58.045757  444999 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:29:58.045788  444999 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:29:58.045798  444999 kubeadm.go:934] updating node { 192.168.39.242 8443 v1.30.3 crio true true} ...
	I0731 19:29:58.045926  444999 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-693348 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-693348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:29:58.045997  444999 ssh_runner.go:195] Run: crio config
	I0731 19:29:58.101033  444999 cni.go:84] Creating CNI manager for ""
	I0731 19:29:58.101056  444999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:29:58.101069  444999 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:29:58.101089  444999 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-693348 NodeName:pause-693348 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:29:58.101245  444999 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-693348"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 19:29:58.101317  444999 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:29:58.112523  444999 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:29:58.112627  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:29:58.122692  444999 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 19:29:58.139687  444999 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:29:58.156831  444999 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0731 19:29:58.174751  444999 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I0731 19:29:58.179516  444999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:29:58.319500  444999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:29:58.335209  444999 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348 for IP: 192.168.39.242
	I0731 19:29:58.335240  444999 certs.go:194] generating shared ca certs ...
	I0731 19:29:58.335263  444999 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:29:58.335461  444999 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:29:58.335502  444999 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:29:58.335512  444999 certs.go:256] generating profile certs ...
	I0731 19:29:58.335586  444999 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/client.key
	I0731 19:29:58.335637  444999 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/apiserver.key.cd33f1dc
	I0731 19:29:58.335672  444999 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/proxy-client.key
	I0731 19:29:58.335777  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:29:58.335809  444999 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:29:58.335818  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:29:58.335840  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:29:58.335900  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:29:58.335934  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:29:58.335973  444999 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:29:58.336609  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:29:58.371858  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:29:58.401924  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:29:58.430665  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:29:58.469118  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 19:29:58.631311  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 19:29:58.718198  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:29:58.785954  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/pause-693348/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:29:58.903069  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:29:58.944967  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:29:59.060893  444999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:29:59.214948  444999 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:29:59.353977  444999 ssh_runner.go:195] Run: openssl version
	I0731 19:29:59.364605  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:29:59.385982  444999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:29:59.391362  444999 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:29:59.391452  444999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:29:59.407190  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 19:29:59.429968  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:29:59.466765  444999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:59.475733  444999 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:59.475803  444999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:29:59.510673  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:29:59.525924  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:29:59.575203  444999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:29:59.581475  444999 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:29:59.581560  444999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:29:59.589162  444999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 19:29:59.603989  444999 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:29:59.608888  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 19:29:59.615307  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 19:29:59.623242  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 19:29:59.629240  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 19:29:59.636060  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 19:29:59.641680  444999 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 19:29:59.651202  444999 kubeadm.go:392] StartCluster: {Name:pause-693348 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-693348 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:29:59.651326  444999 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:29:59.651372  444999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:29:59.726620  444999 cri.go:89] found id: "3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355"
	I0731 19:29:59.726641  444999 cri.go:89] found id: "cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400"
	I0731 19:29:59.726645  444999 cri.go:89] found id: "d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6"
	I0731 19:29:59.726647  444999 cri.go:89] found id: "d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1"
	I0731 19:29:59.726650  444999 cri.go:89] found id: "b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6"
	I0731 19:29:59.726653  444999 cri.go:89] found id: "ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd"
	I0731 19:29:59.726656  444999 cri.go:89] found id: "60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486"
	I0731 19:29:59.726658  444999 cri.go:89] found id: "533231090afa8f4e8616058b730098c78d46065abff419edc8b28d2ebf494a0c"
	I0731 19:29:59.726661  444999 cri.go:89] found id: "a00f738011bf10471009bcae207a0de27a0ca9582080714febc0d04bf1989516"
	I0731 19:29:59.726668  444999 cri.go:89] found id: "fb000af18439f1f52a0eb9fb84a52af9284dbcd15cdefc9564ce4d4658a49ba9"
	I0731 19:29:59.726670  444999 cri.go:89] found id: "9b559cc5dbd72656d5a84056ecbd180294d0c90f44ad7502bef2f0c0f906aee3"
	I0731 19:29:59.726672  444999 cri.go:89] found id: ""
	I0731 19:29:59.726722  444999 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-693348 -n pause-693348
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-693348 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-693348 logs -n 25: (3.001980901s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p running-upgrade-043979          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 19:24 UTC | 31 Jul 24 19:26 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-954897             | offline-crio-954897       | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:25 UTC |
	| start   | -p kubernetes-upgrade-916231       | kubernetes-upgrade-916231 | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-114834        | force-systemd-env-114834  | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:25 UTC |
	| start   | -p stopped-upgrade-096992          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:27 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:27 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-043979          | running-upgrade-043979    | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:28 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-096992 stop        | minikube                  | jenkins | v1.26.0 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	| start   | -p stopped-upgrade-096992          | stopped-upgrade-096992    | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:28 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-978325 sudo        | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-043979          | running-upgrade-043979    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p pause-693348 --memory=2048      | pause-693348              | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:29 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-096992          | stopped-upgrade-096992    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p cert-expiration-362350          | cert-expiration-362350    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:29 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-978325 sudo        | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p force-systemd-flag-748014       | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:30 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-693348                    | pause-693348              | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:30 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-748014 ssh cat  | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC | 31 Jul 24 19:30 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-748014       | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC | 31 Jul 24 19:30 UTC |
	| start   | -p cert-options-235206             | cert-options-235206       | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:30:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:30:09.554033  445365 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:30:09.554127  445365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:30:09.554130  445365 out.go:304] Setting ErrFile to fd 2...
	I0731 19:30:09.554135  445365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:30:09.554292  445365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:30:09.554900  445365 out.go:298] Setting JSON to false
	I0731 19:30:09.555939  445365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11553,"bootTime":1722442657,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:30:09.555998  445365 start.go:139] virtualization: kvm guest
	I0731 19:30:09.558448  445365 out.go:177] * [cert-options-235206] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:30:09.560212  445365 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:30:09.560202  445365 notify.go:220] Checking for updates...
	I0731 19:30:09.561999  445365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:30:09.563646  445365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:30:09.565285  445365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:09.566784  445365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:30:09.568275  445365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:30:09.570193  445365 config.go:182] Loaded profile config "cert-expiration-362350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:09.570283  445365 config.go:182] Loaded profile config "kubernetes-upgrade-916231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 19:30:09.570391  445365 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:09.570502  445365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:30:09.609426  445365 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:30:09.610794  445365 start.go:297] selected driver: kvm2
	I0731 19:30:09.610800  445365 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:30:09.610810  445365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:30:09.611596  445365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:30:09.611687  445365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:30:09.627437  445365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:30:09.627478  445365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:30:09.627679  445365 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:30:09.627700  445365 cni.go:84] Creating CNI manager for ""
	I0731 19:30:09.627706  445365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:30:09.627714  445365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:30:09.627764  445365 start.go:340] cluster config:
	{Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:30:09.627856  445365 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:30:09.629700  445365 out.go:177] * Starting "cert-options-235206" primary control-plane node in "cert-options-235206" cluster
	I0731 19:30:09.630955  445365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:30:09.630984  445365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:30:09.630997  445365 cache.go:56] Caching tarball of preloaded images
	I0731 19:30:09.631075  445365 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:30:09.631086  445365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:30:09.631184  445365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json ...
	I0731 19:30:09.631197  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json: {Name:mk04b1712094591b36c04ea2524abe609899b0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:09.631320  445365 start.go:360] acquireMachinesLock for cert-options-235206: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:30:09.631344  445365 start.go:364] duration metric: took 15.757µs to acquireMachinesLock for "cert-options-235206"
	I0731 19:30:09.631357  445365 start.go:93] Provisioning new machine with config: &{Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:30:09.631404  445365 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:30:09.633099  445365 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 19:30:09.633279  445365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:09.633316  445365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:09.648608  445365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0731 19:30:09.649133  445365 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:09.649748  445365 main.go:141] libmachine: Using API Version  1
	I0731 19:30:09.649764  445365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:09.650166  445365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:09.650385  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:09.650517  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:09.650690  445365 start.go:159] libmachine.API.Create for "cert-options-235206" (driver="kvm2")
	I0731 19:30:09.650721  445365 client.go:168] LocalClient.Create starting
	I0731 19:30:09.650761  445365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 19:30:09.650812  445365 main.go:141] libmachine: Decoding PEM data...
	I0731 19:30:09.650835  445365 main.go:141] libmachine: Parsing certificate...
	I0731 19:30:09.650905  445365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 19:30:09.650923  445365 main.go:141] libmachine: Decoding PEM data...
	I0731 19:30:09.650933  445365 main.go:141] libmachine: Parsing certificate...
	I0731 19:30:09.650946  445365 main.go:141] libmachine: Running pre-create checks...
	I0731 19:30:09.650955  445365 main.go:141] libmachine: (cert-options-235206) Calling .PreCreateCheck
	I0731 19:30:09.651354  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:09.651740  445365 main.go:141] libmachine: Creating machine...
	I0731 19:30:09.651748  445365 main.go:141] libmachine: (cert-options-235206) Calling .Create
	I0731 19:30:09.651919  445365 main.go:141] libmachine: (cert-options-235206) Creating KVM machine...
	I0731 19:30:09.653255  445365 main.go:141] libmachine: (cert-options-235206) DBG | found existing default KVM network
	I0731 19:30:09.654418  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.654264  445387 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:2f:29} reservation:<nil>}
	I0731 19:30:09.655406  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.655328  445387 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:d7:7e} reservation:<nil>}
	I0731 19:30:09.656198  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.656135  445387 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:f7:51} reservation:<nil>}
	I0731 19:30:09.658571  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.658427  445387 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 19:30:09.659769  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.659678  445387 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000404c40}
	I0731 19:30:09.659807  445365 main.go:141] libmachine: (cert-options-235206) DBG | created network xml: 
	I0731 19:30:09.659822  445365 main.go:141] libmachine: (cert-options-235206) DBG | <network>
	I0731 19:30:09.659831  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <name>mk-cert-options-235206</name>
	I0731 19:30:09.659837  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <dns enable='no'/>
	I0731 19:30:09.659845  445365 main.go:141] libmachine: (cert-options-235206) DBG |   
	I0731 19:30:09.659859  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0731 19:30:09.659865  445365 main.go:141] libmachine: (cert-options-235206) DBG |     <dhcp>
	I0731 19:30:09.659878  445365 main.go:141] libmachine: (cert-options-235206) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0731 19:30:09.659883  445365 main.go:141] libmachine: (cert-options-235206) DBG |     </dhcp>
	I0731 19:30:09.659886  445365 main.go:141] libmachine: (cert-options-235206) DBG |   </ip>
	I0731 19:30:09.659891  445365 main.go:141] libmachine: (cert-options-235206) DBG |   
	I0731 19:30:09.659901  445365 main.go:141] libmachine: (cert-options-235206) DBG | </network>
	I0731 19:30:09.659932  445365 main.go:141] libmachine: (cert-options-235206) DBG | 
	I0731 19:30:09.665980  445365 main.go:141] libmachine: (cert-options-235206) DBG | trying to create private KVM network mk-cert-options-235206 192.168.83.0/24...
	I0731 19:30:09.737171  445365 main.go:141] libmachine: (cert-options-235206) DBG | private KVM network mk-cert-options-235206 192.168.83.0/24 created
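The network XML logged above is a plain libvirt network definition. As an illustrative aside (not part of the recorded run, and assuming the libvirt client tools are installed on the Jenkins host), the resulting network could be inspected with standard virsh commands:

	virsh net-list --all
	virsh net-dumpxml mk-cert-options-235206
	virsh net-dhcp-leases mk-cert-options-235206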
	I0731 19:30:09.737218  445365 main.go:141] libmachine: (cert-options-235206) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 ...
	I0731 19:30:09.737228  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.737149  445387 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:09.737241  445365 main.go:141] libmachine: (cert-options-235206) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 19:30:09.738033  445365 main.go:141] libmachine: (cert-options-235206) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 19:30:10.009172  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.009041  445387 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa...
	I0731 19:30:10.176448  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.176289  445387 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/cert-options-235206.rawdisk...
	I0731 19:30:10.176469  445365 main.go:141] libmachine: (cert-options-235206) DBG | Writing magic tar header
	I0731 19:30:10.176483  445365 main.go:141] libmachine: (cert-options-235206) DBG | Writing SSH key tar header
	I0731 19:30:10.176605  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.176493  445387 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 ...
	I0731 19:30:10.176636  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206
	I0731 19:30:10.176650  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 (perms=drwx------)
	I0731 19:30:10.176660  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 19:30:10.176686  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:10.176695  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:30:10.176703  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 19:30:10.176713  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 19:30:10.176721  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:30:10.176739  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:30:10.176747  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 19:30:10.176758  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:30:10.176765  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:30:10.176798  445365 main.go:141] libmachine: (cert-options-235206) Creating domain...
	I0731 19:30:10.176806  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home
	I0731 19:30:10.176817  445365 main.go:141] libmachine: (cert-options-235206) DBG | Skipping /home - not owner
	I0731 19:30:10.178267  445365 main.go:141] libmachine: (cert-options-235206) define libvirt domain using xml: 
	I0731 19:30:10.178294  445365 main.go:141] libmachine: (cert-options-235206) <domain type='kvm'>
	I0731 19:30:10.178304  445365 main.go:141] libmachine: (cert-options-235206)   <name>cert-options-235206</name>
	I0731 19:30:10.178310  445365 main.go:141] libmachine: (cert-options-235206)   <memory unit='MiB'>2048</memory>
	I0731 19:30:10.178318  445365 main.go:141] libmachine: (cert-options-235206)   <vcpu>2</vcpu>
	I0731 19:30:10.178326  445365 main.go:141] libmachine: (cert-options-235206)   <features>
	I0731 19:30:10.178332  445365 main.go:141] libmachine: (cert-options-235206)     <acpi/>
	I0731 19:30:10.178337  445365 main.go:141] libmachine: (cert-options-235206)     <apic/>
	I0731 19:30:10.178341  445365 main.go:141] libmachine: (cert-options-235206)     <pae/>
	I0731 19:30:10.178346  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178350  445365 main.go:141] libmachine: (cert-options-235206)   </features>
	I0731 19:30:10.178356  445365 main.go:141] libmachine: (cert-options-235206)   <cpu mode='host-passthrough'>
	I0731 19:30:10.178360  445365 main.go:141] libmachine: (cert-options-235206)   
	I0731 19:30:10.178370  445365 main.go:141] libmachine: (cert-options-235206)   </cpu>
	I0731 19:30:10.178375  445365 main.go:141] libmachine: (cert-options-235206)   <os>
	I0731 19:30:10.178378  445365 main.go:141] libmachine: (cert-options-235206)     <type>hvm</type>
	I0731 19:30:10.178382  445365 main.go:141] libmachine: (cert-options-235206)     <boot dev='cdrom'/>
	I0731 19:30:10.178385  445365 main.go:141] libmachine: (cert-options-235206)     <boot dev='hd'/>
	I0731 19:30:10.178392  445365 main.go:141] libmachine: (cert-options-235206)     <bootmenu enable='no'/>
	I0731 19:30:10.178396  445365 main.go:141] libmachine: (cert-options-235206)   </os>
	I0731 19:30:10.178413  445365 main.go:141] libmachine: (cert-options-235206)   <devices>
	I0731 19:30:10.178417  445365 main.go:141] libmachine: (cert-options-235206)     <disk type='file' device='cdrom'>
	I0731 19:30:10.178424  445365 main.go:141] libmachine: (cert-options-235206)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/boot2docker.iso'/>
	I0731 19:30:10.178429  445365 main.go:141] libmachine: (cert-options-235206)       <target dev='hdc' bus='scsi'/>
	I0731 19:30:10.178433  445365 main.go:141] libmachine: (cert-options-235206)       <readonly/>
	I0731 19:30:10.178436  445365 main.go:141] libmachine: (cert-options-235206)     </disk>
	I0731 19:30:10.178441  445365 main.go:141] libmachine: (cert-options-235206)     <disk type='file' device='disk'>
	I0731 19:30:10.178449  445365 main.go:141] libmachine: (cert-options-235206)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:30:10.178457  445365 main.go:141] libmachine: (cert-options-235206)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/cert-options-235206.rawdisk'/>
	I0731 19:30:10.178465  445365 main.go:141] libmachine: (cert-options-235206)       <target dev='hda' bus='virtio'/>
	I0731 19:30:10.178474  445365 main.go:141] libmachine: (cert-options-235206)     </disk>
	I0731 19:30:10.178477  445365 main.go:141] libmachine: (cert-options-235206)     <interface type='network'>
	I0731 19:30:10.178482  445365 main.go:141] libmachine: (cert-options-235206)       <source network='mk-cert-options-235206'/>
	I0731 19:30:10.178486  445365 main.go:141] libmachine: (cert-options-235206)       <model type='virtio'/>
	I0731 19:30:10.178490  445365 main.go:141] libmachine: (cert-options-235206)     </interface>
	I0731 19:30:10.178493  445365 main.go:141] libmachine: (cert-options-235206)     <interface type='network'>
	I0731 19:30:10.178498  445365 main.go:141] libmachine: (cert-options-235206)       <source network='default'/>
	I0731 19:30:10.178501  445365 main.go:141] libmachine: (cert-options-235206)       <model type='virtio'/>
	I0731 19:30:10.178505  445365 main.go:141] libmachine: (cert-options-235206)     </interface>
	I0731 19:30:10.178508  445365 main.go:141] libmachine: (cert-options-235206)     <serial type='pty'>
	I0731 19:30:10.178512  445365 main.go:141] libmachine: (cert-options-235206)       <target port='0'/>
	I0731 19:30:10.178515  445365 main.go:141] libmachine: (cert-options-235206)     </serial>
	I0731 19:30:10.178551  445365 main.go:141] libmachine: (cert-options-235206)     <console type='pty'>
	I0731 19:30:10.178569  445365 main.go:141] libmachine: (cert-options-235206)       <target type='serial' port='0'/>
	I0731 19:30:10.178578  445365 main.go:141] libmachine: (cert-options-235206)     </console>
	I0731 19:30:10.178584  445365 main.go:141] libmachine: (cert-options-235206)     <rng model='virtio'>
	I0731 19:30:10.178594  445365 main.go:141] libmachine: (cert-options-235206)       <backend model='random'>/dev/random</backend>
	I0731 19:30:10.178600  445365 main.go:141] libmachine: (cert-options-235206)     </rng>
	I0731 19:30:10.178607  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178613  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178620  445365 main.go:141] libmachine: (cert-options-235206)   </devices>
	I0731 19:30:10.178625  445365 main.go:141] libmachine: (cert-options-235206) </domain>
	I0731 19:30:10.178638  445365 main.go:141] libmachine: (cert-options-235206) 
	I0731 19:30:10.183421  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:97:58:c9 in network default
	I0731 19:30:10.184037  445365 main.go:141] libmachine: (cert-options-235206) Ensuring networks are active...
	I0731 19:30:10.184066  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:10.184881  445365 main.go:141] libmachine: (cert-options-235206) Ensuring network default is active
	I0731 19:30:10.185209  445365 main.go:141] libmachine: (cert-options-235206) Ensuring network mk-cert-options-235206 is active
	I0731 19:30:10.185702  445365 main.go:141] libmachine: (cert-options-235206) Getting domain xml...
	I0731 19:30:10.186440  445365 main.go:141] libmachine: (cert-options-235206) Creating domain...
	I0731 19:30:11.436559  445365 main.go:141] libmachine: (cert-options-235206) Waiting to get IP...
	I0731 19:30:11.437740  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.438185  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.438230  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.438163  445387 retry.go:31] will retry after 267.133489ms: waiting for machine to come up
	I0731 19:30:11.706829  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.707382  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.707399  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.707342  445387 retry.go:31] will retry after 273.164557ms: waiting for machine to come up
	I0731 19:30:11.981766  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.982326  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.982341  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.982280  445387 retry.go:31] will retry after 297.205185ms: waiting for machine to come up
	I0731 19:30:12.281009  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:12.281509  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:12.281573  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:12.281466  445387 retry.go:31] will retry after 440.374161ms: waiting for machine to come up
	I0731 19:30:12.723178  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:12.723695  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:12.723718  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:12.723634  445387 retry.go:31] will retry after 645.282592ms: waiting for machine to come up
	I0731 19:30:13.370136  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:13.370704  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:13.370723  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:13.370648  445387 retry.go:31] will retry after 840.138457ms: waiting for machine to come up
	I0731 19:30:14.212821  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:14.213300  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:14.213324  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:14.213240  445387 retry.go:31] will retry after 1.17522735s: waiting for machine to come up
	I0731 19:30:13.417337  444999 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355 cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400 d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6 d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1 b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6 ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 533231090afa8f4e8616058b730098c78d46065abff419edc8b28d2ebf494a0c a00f738011bf10471009bcae207a0de27a0ca9582080714febc0d04bf1989516 fb000af18439f1f52a0eb9fb84a52af9284dbcd15cdefc9564ce4d4658a49ba9 9b559cc5dbd72656d5a84056ecbd180294d0c90f44ad7502bef2f0c0f906aee3: (13.531690513s)
	W0731 19:30:13.417432  444999 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355 cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400 d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6 d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1 b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6 ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 533231090afa8f4e8616058b730098c78d46065abff419edc8b28d2ebf494a0c a00f738011bf10471009bcae207a0de27a0ca9582080714febc0d04bf1989516 fb000af18439f1f52a0eb9fb84a52af9284dbcd15cdefc9564ce4d4658a49ba9 9b559cc5dbd72656d5a84056ecbd180294d0c90f44ad7502bef2f0c0f906aee3: Process exited with status 1
	stdout:
	3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355
	cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400
	d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6
	d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1
	b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6
	ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd
	
	stderr:
	E0731 19:30:13.409359    2892 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": container with ID starting with 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 not found: ID does not exist" containerID="60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486"
	time="2024-07-31T19:30:13Z" level=fatal msg="stopping the container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": rpc error: code = NotFound desc = could not find container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": container with ID starting with 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 not found: ID does not exist"
	I0731 19:30:13.417520  444999 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 19:30:13.473335  444999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:30:13.486805  444999 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 31 19:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 31 19:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 31 19:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 31 19:29 /etc/kubernetes/scheduler.conf
	
	I0731 19:30:13.486907  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:30:13.498420  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:30:13.509870  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:30:13.519692  444999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:30:13.519762  444999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:30:13.529372  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:30:13.538641  444999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:30:13.538718  444999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:30:13.550385  444999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:30:13.560704  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:13.628802  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.365861  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.602677  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.667574  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.744644  444999 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:14.744740  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.245670  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.745157  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.761596  444999 api_server.go:72] duration metric: took 1.016962124s to wait for apiserver process to appear ...
	I0731 19:30:15.761628  444999 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:15.761653  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:17.984280  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:30:17.984313  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:30:17.984326  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.036329  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:30:18.036384  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:30:18.262753  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.267507  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:30:18.267552  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:30:18.762584  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.768080  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:30:18.768138  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:30:19.262725  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:19.267720  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0731 19:30:19.274514  444999 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:19.274541  444999 api_server.go:131] duration metric: took 3.512907565s to wait for apiserver health ...
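The /healthz polling shown above can be reproduced by hand. A minimal sketch, assuming the pause-693348 kubeconfig context is available locally; the 403 responses earlier in the log are what an unauthenticated (system:anonymous) request receives, so the request must carry the cluster's client credentials, which kubectl supplies automatically:

	kubectl --context pause-693348 get --raw /healthz
	kubectl --context pause-693348 get --raw '/healthz?verbose'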
	I0731 19:30:19.274551  444999 cni.go:84] Creating CNI manager for ""
	I0731 19:30:19.274558  444999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:30:19.276671  444999 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 19:30:15.390860  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:15.391461  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:15.391635  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:15.391409  445387 retry.go:31] will retry after 1.42107697s: waiting for machine to come up
	I0731 19:30:16.814500  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:16.815128  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:16.815150  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:16.815074  445387 retry.go:31] will retry after 1.296362905s: waiting for machine to come up
	I0731 19:30:18.113814  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:18.114391  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:18.114413  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:18.114333  445387 retry.go:31] will retry after 1.980219574s: waiting for machine to come up
	I0731 19:30:19.278235  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 19:30:19.294552  444999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
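The 496-byte conflist copied above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. As an illustrative check (not part of the recorded run), the file that was written could be read back from the guest over SSH:

	minikube -p pause-693348 ssh -- sudo ls -la /etc/cni/net.d/
	minikube -p pause-693348 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist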
	I0731 19:30:19.316402  444999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:19.342508  444999 system_pods.go:59] 6 kube-system pods found
	I0731 19:30:19.342552  444999 system_pods.go:61] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:19.342563  444999 system_pods.go:61] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 19:30:19.342586  444999 system_pods.go:61] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 19:30:19.342596  444999 system_pods.go:61] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 19:30:19.342602  444999 system_pods.go:61] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:19.342610  444999 system_pods.go:61] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 19:30:19.342619  444999 system_pods.go:74] duration metric: took 26.191086ms to wait for pod list to return data ...
	I0731 19:30:19.342630  444999 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:19.349802  444999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:19.349835  444999 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:19.349849  444999 node_conditions.go:105] duration metric: took 7.211749ms to run NodePressure ...
	I0731 19:30:19.349883  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:19.654878  444999 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 19:30:19.660211  444999 kubeadm.go:739] kubelet initialised
	I0731 19:30:19.660231  444999 kubeadm.go:740] duration metric: took 5.325891ms waiting for restarted kubelet to initialise ...
	I0731 19:30:19.660239  444999 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:19.665887  444999 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:19.671210  444999 pod_ready.go:92] pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:19.671232  444999 pod_ready.go:81] duration metric: took 5.313182ms for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:19.671241  444999 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:20.096051  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:20.096593  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:20.096618  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:20.096533  445387 retry.go:31] will retry after 2.40569587s: waiting for machine to come up
	I0731 19:30:22.503909  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:22.504562  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:22.504585  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:22.504501  445387 retry.go:31] will retry after 2.942445364s: waiting for machine to come up
	I0731 19:30:21.677400  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:23.677496  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:25.448207  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:25.448704  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:25.448726  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:25.448644  445387 retry.go:31] will retry after 4.350415899s: waiting for machine to come up
	I0731 19:30:25.678298  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:27.678406  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:28.678352  444999 pod_ready.go:92] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:28.678384  444999 pod_ready.go:81] duration metric: took 9.007134981s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:28.678396  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:29.800441  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.800990  445365 main.go:141] libmachine: (cert-options-235206) Found IP for machine: 192.168.83.131
	I0731 19:30:29.801015  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has current primary IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.801021  445365 main.go:141] libmachine: (cert-options-235206) Reserving static IP address...
	I0731 19:30:29.801374  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find host DHCP lease matching {name: "cert-options-235206", mac: "52:54:00:ea:6f:ac", ip: "192.168.83.131"} in network mk-cert-options-235206
	I0731 19:30:29.883389  445365 main.go:141] libmachine: (cert-options-235206) DBG | Getting to WaitForSSH function...
	I0731 19:30:29.883406  445365 main.go:141] libmachine: (cert-options-235206) Reserved static IP address: 192.168.83.131
	I0731 19:30:29.883417  445365 main.go:141] libmachine: (cert-options-235206) Waiting for SSH to be available...
	I0731 19:30:29.886258  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.886498  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:29.886519  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.886638  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using SSH client type: external
	I0731 19:30:29.886658  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa (-rw-------)
	I0731 19:30:29.886691  445365 main.go:141] libmachine: (cert-options-235206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:30:29.886699  445365 main.go:141] libmachine: (cert-options-235206) DBG | About to run SSH command:
	I0731 19:30:29.886710  445365 main.go:141] libmachine: (cert-options-235206) DBG | exit 0
	I0731 19:30:30.012613  445365 main.go:141] libmachine: (cert-options-235206) DBG | SSH cmd err, output: <nil>: 
	I0731 19:30:30.012932  445365 main.go:141] libmachine: (cert-options-235206) KVM machine creation complete!
	I0731 19:30:30.013272  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:30.013838  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:30.014061  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:30.014220  445365 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:30:30.014235  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetState
	I0731 19:30:30.015464  445365 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:30:30.015482  445365 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:30:30.015487  445365 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:30:30.015495  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.017735  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.018058  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.018079  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.018213  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.018435  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.018610  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.018790  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.018961  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.019172  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.019180  445365 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:30:30.128088  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:30:30.128102  445365 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:30:30.128112  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.131297  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.131730  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.131752  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.131972  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.132185  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.132425  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.132602  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.132811  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.132989  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.132995  445365 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:30:30.245744  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:30:30.245917  445365 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:30:30.245927  445365 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:30:30.245935  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.246239  445365 buildroot.go:166] provisioning hostname "cert-options-235206"
	I0731 19:30:30.246253  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.246498  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.249662  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.250097  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.250121  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.250313  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.250503  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.250651  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.250854  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.251001  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.251230  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.251238  445365 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-235206 && echo "cert-options-235206" | sudo tee /etc/hostname
	I0731 19:30:30.373209  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-235206
	
	I0731 19:30:30.373227  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.376165  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.376577  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.376600  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.376819  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.377013  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.377163  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.377265  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.377434  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.377606  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.377616  445365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-235206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-235206/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-235206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:30:30.494988  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:30:30.495009  445365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:30:30.495032  445365 buildroot.go:174] setting up certificates
	I0731 19:30:30.495046  445365 provision.go:84] configureAuth start
	I0731 19:30:30.495058  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.495371  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:30.498222  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.498627  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.498649  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.498847  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.501100  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.501406  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.501437  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.501551  445365 provision.go:143] copyHostCerts
	I0731 19:30:30.501621  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:30:30.501627  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:30:30.501689  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:30:30.501786  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:30:30.501790  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:30:30.501812  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:30:30.501887  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:30:30.501890  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:30:30.501909  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:30:30.501964  445365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.cert-options-235206 san=[127.0.0.1 192.168.83.131 cert-options-235206 localhost minikube]
	I0731 19:30:30.610211  445365 provision.go:177] copyRemoteCerts
	I0731 19:30:30.610267  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:30:30.610293  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.613212  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.613613  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.613632  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.613861  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.614057  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.614240  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.614459  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:30.704610  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0731 19:30:30.731319  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:30:30.756432  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 19:30:30.782427  445365 provision.go:87] duration metric: took 287.366086ms to configureAuth
	I0731 19:30:30.782448  445365 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:30:30.782690  445365 config.go:182] Loaded profile config "cert-options-235206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:30.782768  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.786189  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.786528  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.786563  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.786722  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.786986  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.787190  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.787367  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.787571  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.787827  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.787842  445365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:30:31.068104  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:30:31.068125  445365 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:30:31.068134  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetURL
	I0731 19:30:31.069635  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using libvirt version 6000000
	I0731 19:30:31.072097  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.072445  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.072467  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.072609  445365 main.go:141] libmachine: Docker is up and running!
	I0731 19:30:31.072619  445365 main.go:141] libmachine: Reticulating splines...
	I0731 19:30:31.072625  445365 client.go:171] duration metric: took 21.421898266s to LocalClient.Create
	I0731 19:30:31.072668  445365 start.go:167] duration metric: took 21.42196397s to libmachine.API.Create "cert-options-235206"
	I0731 19:30:31.072676  445365 start.go:293] postStartSetup for "cert-options-235206" (driver="kvm2")
	I0731 19:30:31.072687  445365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:30:31.072704  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.072990  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:30:31.073007  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.075136  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.075428  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.075448  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.075648  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.075835  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.075980  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.076149  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.160193  445365 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:30:31.164368  445365 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:30:31.164407  445365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:30:31.164480  445365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:30:31.164561  445365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:30:31.164658  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:30:31.174214  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:30:31.201068  445365 start.go:296] duration metric: took 128.37662ms for postStartSetup
	I0731 19:30:31.201125  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:31.201795  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:31.204789  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.205141  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.205159  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.205401  445365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json ...
	I0731 19:30:31.205582  445365 start.go:128] duration metric: took 21.574169362s to createHost
	I0731 19:30:31.205599  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.208041  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.208422  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.208443  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.208608  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.208792  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.208946  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.209084  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.209221  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:31.209392  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:31.209397  445365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:30:31.325330  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454231.295084941
	
	I0731 19:30:31.325346  445365 fix.go:216] guest clock: 1722454231.295084941
	I0731 19:30:31.325352  445365 fix.go:229] Guest: 2024-07-31 19:30:31.295084941 +0000 UTC Remote: 2024-07-31 19:30:31.205587263 +0000 UTC m=+21.688036733 (delta=89.497678ms)
	I0731 19:30:31.325380  445365 fix.go:200] guest clock delta is within tolerance: 89.497678ms
	I0731 19:30:31.325388  445365 start.go:83] releasing machines lock for "cert-options-235206", held for 21.694038682s
	I0731 19:30:31.325404  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.325689  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:31.328431  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.328849  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.328904  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.329047  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329608  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329750  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329828  445365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:30:31.329854  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.329944  445365 ssh_runner.go:195] Run: cat /version.json
	I0731 19:30:31.329957  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.332713  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.332913  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333056  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.333071  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333195  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.333331  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.333337  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.333347  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333493  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.333509  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.333643  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.333651  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.333768  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.333901  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.418285  445365 ssh_runner.go:195] Run: systemctl --version
	I0731 19:30:31.444795  445365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:30:31.605971  445365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:30:31.613256  445365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:30:31.613319  445365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:30:31.629683  445365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:30:31.629700  445365 start.go:495] detecting cgroup driver to use...
	I0731 19:30:31.629782  445365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:30:31.646731  445365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:30:31.662539  445365 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:30:31.662598  445365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:30:31.677993  445365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:30:31.695339  445365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:30:31.826867  445365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:30:31.987460  445365 docker.go:233] disabling docker service ...
	I0731 19:30:31.987524  445365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:30:32.002319  445365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:30:32.016257  445365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:30:32.136724  445365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:30:32.253918  445365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:30:32.267828  445365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:30:32.287270  445365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:30:32.287317  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.297573  445365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:30:32.297625  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.307953  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.317966  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.328639  445365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:30:32.339594  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.350344  445365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.367732  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.378104  445365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:30:32.387353  445365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:30:32.387405  445365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:30:32.399806  445365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:30:32.409238  445365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:30:32.525701  445365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:30:32.682886  445365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:30:32.682964  445365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:30:32.688511  445365 start.go:563] Will wait 60s for crictl version
	I0731 19:30:32.688568  445365 ssh_runner.go:195] Run: which crictl
	I0731 19:30:32.692463  445365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:30:32.732031  445365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:30:32.732124  445365 ssh_runner.go:195] Run: crio --version
	I0731 19:30:32.764525  445365 ssh_runner.go:195] Run: crio --version
	I0731 19:30:32.805679  445365 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:30:30.685845  444999 pod_ready.go:102] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:33.186539  444999 pod_ready.go:102] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:33.691674  444999 pod_ready.go:92] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.691698  444999 pod_ready.go:81] duration metric: took 5.013294318s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.691711  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.697738  444999 pod_ready.go:92] pod "kube-controller-manager-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.697765  444999 pod_ready.go:81] duration metric: took 6.044943ms for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.697778  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.702596  444999 pod_ready.go:92] pod "kube-proxy-499j6" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.702624  444999 pod_ready.go:81] duration metric: took 4.837737ms for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.702636  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.211929  444999 pod_ready.go:92] pod "kube-scheduler-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.211958  444999 pod_ready.go:81] duration metric: took 509.313243ms for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.211968  444999 pod_ready.go:38] duration metric: took 14.551720121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:34.211996  444999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:30:34.230048  444999 ops.go:34] apiserver oom_adj: -16
	I0731 19:30:34.230076  444999 kubeadm.go:597] duration metric: took 34.438051393s to restartPrimaryControlPlane
	I0731 19:30:34.230087  444999 kubeadm.go:394] duration metric: took 34.578892558s to StartCluster
	I0731 19:30:34.230111  444999 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:34.230207  444999 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:30:34.231596  444999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:34.231914  444999 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:30:34.232146  444999 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 19:30:34.232367  444999 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:34.233918  444999 out.go:177] * Enabled addons: 
	I0731 19:30:34.233927  444999 out.go:177] * Verifying Kubernetes components...
	I0731 19:30:32.806977  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:32.809743  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:32.810027  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:32.810044  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:32.810355  445365 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0731 19:30:32.814796  445365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:30:32.827656  445365 kubeadm.go:883] updating cluster {Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.131 Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:30:32.827762  445365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:30:32.827804  445365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:30:32.865615  445365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:30:32.865668  445365 ssh_runner.go:195] Run: which lz4
	I0731 19:30:32.869966  445365 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 19:30:32.874203  445365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:30:32.874231  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:30:34.319994  445365 crio.go:462] duration metric: took 1.45008048s to copy over tarball
	I0731 19:30:34.320089  445365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 19:30:34.235295  444999 addons.go:510] duration metric: took 3.148477ms for enable addons: enabled=[]
	I0731 19:30:34.235423  444999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:30:34.420421  444999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:30:34.435416  444999 node_ready.go:35] waiting up to 6m0s for node "pause-693348" to be "Ready" ...
	I0731 19:30:34.440168  444999 node_ready.go:49] node "pause-693348" has status "Ready":"True"
	I0731 19:30:34.440205  444999 node_ready.go:38] duration metric: took 4.747037ms for node "pause-693348" to be "Ready" ...
	I0731 19:30:34.440220  444999 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:34.446191  444999 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.485356  444999 pod_ready.go:92] pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.485381  444999 pod_ready.go:81] duration metric: took 39.155185ms for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.485394  444999 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.884349  444999 pod_ready.go:92] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.884405  444999 pod_ready.go:81] duration metric: took 399.000924ms for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.884422  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.283356  444999 pod_ready.go:92] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:35.283384  444999 pod_ready.go:81] duration metric: took 398.954086ms for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.283393  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.683368  444999 pod_ready.go:92] pod "kube-controller-manager-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:35.683399  444999 pod_ready.go:81] duration metric: took 399.998844ms for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.683410  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.083510  444999 pod_ready.go:92] pod "kube-proxy-499j6" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:36.083541  444999 pod_ready.go:81] duration metric: took 400.125086ms for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.083552  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.483707  444999 pod_ready.go:92] pod "kube-scheduler-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:36.483746  444999 pod_ready.go:81] duration metric: took 400.183759ms for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.483757  444999 pod_ready.go:38] duration metric: took 2.043522081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:36.483783  444999 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:36.483854  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:36.500797  444999 api_server.go:72] duration metric: took 2.268834929s to wait for apiserver process to appear ...
	I0731 19:30:36.500843  444999 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:36.500871  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:36.506805  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0731 19:30:36.509137  444999 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:36.509160  444999 api_server.go:131] duration metric: took 8.309276ms to wait for apiserver health ...
	I0731 19:30:36.509169  444999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:36.685299  444999 system_pods.go:59] 6 kube-system pods found
	I0731 19:30:36.685330  444999 system_pods.go:61] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:36.685335  444999 system_pods.go:61] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running
	I0731 19:30:36.685340  444999 system_pods.go:61] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running
	I0731 19:30:36.685345  444999 system_pods.go:61] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running
	I0731 19:30:36.685350  444999 system_pods.go:61] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:36.685354  444999 system_pods.go:61] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running
	I0731 19:30:36.685363  444999 system_pods.go:74] duration metric: took 176.186483ms to wait for pod list to return data ...
	I0731 19:30:36.685371  444999 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:30:36.883693  444999 default_sa.go:45] found service account: "default"
	I0731 19:30:36.883730  444999 default_sa.go:55] duration metric: took 198.350935ms for default service account to be created ...
	I0731 19:30:36.883749  444999 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:30:37.085375  444999 system_pods.go:86] 6 kube-system pods found
	I0731 19:30:37.085406  444999 system_pods.go:89] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:37.085412  444999 system_pods.go:89] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running
	I0731 19:30:37.085416  444999 system_pods.go:89] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running
	I0731 19:30:37.085424  444999 system_pods.go:89] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running
	I0731 19:30:37.085427  444999 system_pods.go:89] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:37.085434  444999 system_pods.go:89] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running
	I0731 19:30:37.085444  444999 system_pods.go:126] duration metric: took 201.687898ms to wait for k8s-apps to be running ...
	I0731 19:30:37.085453  444999 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:30:37.085510  444999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:30:37.101665  444999 system_svc.go:56] duration metric: took 16.200153ms WaitForService to wait for kubelet
	I0731 19:30:37.101693  444999 kubeadm.go:582] duration metric: took 2.869738288s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:30:37.101712  444999 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:37.282284  444999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:37.282309  444999 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:37.282320  444999 node_conditions.go:105] duration metric: took 180.603263ms to run NodePressure ...
	I0731 19:30:37.282335  444999 start.go:241] waiting for startup goroutines ...
	I0731 19:30:37.282345  444999 start.go:246] waiting for cluster config update ...
	I0731 19:30:37.282355  444999 start.go:255] writing updated cluster config ...
	I0731 19:30:37.282719  444999 ssh_runner.go:195] Run: rm -f paused
	I0731 19:30:37.336748  444999 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:30:37.339774  444999 out.go:177] * Done! kubectl is now configured to use "pause-693348" cluster and "default" namespace by default
	I0731 19:30:37.616231  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:30:37.616541  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:30:37.616565  441565 kubeadm.go:310] 
	I0731 19:30:37.616618  441565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 19:30:37.616682  441565 kubeadm.go:310] 		timed out waiting for the condition
	I0731 19:30:37.616691  441565 kubeadm.go:310] 
	I0731 19:30:37.616732  441565 kubeadm.go:310] 	This error is likely caused by:
	I0731 19:30:37.616774  441565 kubeadm.go:310] 		- The kubelet is not running
	I0731 19:30:37.616907  441565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 19:30:37.616921  441565 kubeadm.go:310] 
	I0731 19:30:37.617009  441565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 19:30:37.617054  441565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 19:30:37.617101  441565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 19:30:37.617111  441565 kubeadm.go:310] 
	I0731 19:30:37.617237  441565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 19:30:37.617340  441565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 19:30:37.617347  441565 kubeadm.go:310] 
	I0731 19:30:37.617480  441565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 19:30:37.617592  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 19:30:37.617688  441565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 19:30:37.617779  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 19:30:37.617786  441565 kubeadm.go:310] 
	I0731 19:30:37.618674  441565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:30:37.618783  441565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 19:30:37.618878  441565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 19:30:37.618954  441565 kubeadm.go:394] duration metric: took 3m56.904666471s to StartCluster
	I0731 19:30:37.619032  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 19:30:37.619098  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 19:30:37.673896  441565 cri.go:89] found id: ""
	I0731 19:30:37.673924  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.673934  441565 logs.go:278] No container was found matching "kube-apiserver"
	I0731 19:30:37.673942  441565 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 19:30:37.674013  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 19:30:37.722228  441565 cri.go:89] found id: ""
	I0731 19:30:37.722267  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.722279  441565 logs.go:278] No container was found matching "etcd"
	I0731 19:30:37.722291  441565 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 19:30:37.722363  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 19:30:37.773267  441565 cri.go:89] found id: ""
	I0731 19:30:37.773296  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.773307  441565 logs.go:278] No container was found matching "coredns"
	I0731 19:30:37.773314  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 19:30:37.773381  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 19:30:37.813679  441565 cri.go:89] found id: ""
	I0731 19:30:37.813716  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.813728  441565 logs.go:278] No container was found matching "kube-scheduler"
	I0731 19:30:37.813737  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 19:30:37.813804  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 19:30:37.850740  441565 cri.go:89] found id: ""
	I0731 19:30:37.850769  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.850778  441565 logs.go:278] No container was found matching "kube-proxy"
	I0731 19:30:37.850785  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 19:30:37.850839  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 19:30:37.891443  441565 cri.go:89] found id: ""
	I0731 19:30:37.891474  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.891484  441565 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 19:30:37.891491  441565 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 19:30:37.891558  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 19:30:37.932204  441565 cri.go:89] found id: ""
	I0731 19:30:37.932248  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.932261  441565 logs.go:278] No container was found matching "kindnet"
	I0731 19:30:37.932277  441565 logs.go:123] Gathering logs for kubelet ...
	I0731 19:30:37.932296  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 19:30:37.992472  441565 logs.go:123] Gathering logs for dmesg ...
	I0731 19:30:37.992512  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 19:30:38.008005  441565 logs.go:123] Gathering logs for describe nodes ...
	I0731 19:30:38.008043  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 19:30:38.155717  441565 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 19:30:38.155747  441565 logs.go:123] Gathering logs for CRI-O ...
	I0731 19:30:38.155764  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 19:30:38.269491  441565 logs.go:123] Gathering logs for container status ...
	I0731 19:30:38.269537  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 19:30:38.320851  441565 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 19:30:38.320918  441565 out.go:239] * 
	W0731 19:30:38.320985  441565 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:30:38.321016  441565 out.go:239] * 
	W0731 19:30:38.322107  441565 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
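
The troubleshooting hints in the kubeadm output above translate into a short manual check sequence. The following is a minimal sketch, not part of the captured output: it assumes shell access to the affected node (for example via `minikube ssh` with the relevant profile) and uses only the commands and the cri-o socket path already named in the hints.

    # Check whether the kubelet service is running and inspect its recent journal
    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager -n 100

    # Probe the kubelet healthz endpoint that kubeadm polls during wait-control-plane
    curl -sSL http://localhost:10248/healthz

    # List control-plane containers known to cri-o, then inspect a failing one by ID
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the healthz probe is refused (as in the repeated "connection refused" lines above), the kubelet itself never came up, and the journalctl output is usually the more informative of the two sources.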
	
	
	==> CRI-O <==
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.076643278Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454239076618560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68a019b7-d961-4c3a-a19c-99f3e3c9c0cb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.077916542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5706a64-f990-4058-8907-2240f20a9229 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.077993284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5706a64-f990-4058-8907-2240f20a9229 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.078276864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5706a64-f990-4058-8907-2240f20a9229 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.127784319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69b68a06-acdd-4feb-a289-a35ed72d8ea0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.127879703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69b68a06-acdd-4feb-a289-a35ed72d8ea0 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.129042857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea9f87f9-46aa-46c8-a050-5b3463b6628e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.129449808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454239129424135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea9f87f9-46aa-46c8-a050-5b3463b6628e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.130237628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc9e79ea-bf0c-48a1-ae5e-effc1a4d0bc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.130304741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc9e79ea-bf0c-48a1-ae5e-effc1a4d0bc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.130642359Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc9e79ea-bf0c-48a1-ae5e-effc1a4d0bc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.176188684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11fc738d-0892-4666-9ed7-578ff864d6de name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.176288633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11fc738d-0892-4666-9ed7-578ff864d6de name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.177618741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b83c68f1-9ba3-454e-a1a2-9d9d9de66f9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.179284547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454239179157203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b83c68f1-9ba3-454e-a1a2-9d9d9de66f9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.180117290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=242f4c39-04ad-437b-aa97-e1e24c61fbfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.180229324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=242f4c39-04ad-437b-aa97-e1e24c61fbfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.181447447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=242f4c39-04ad-437b-aa97-e1e24c61fbfa name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.244927832Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d05a88dd-9485-48ac-b8e1-0c9bc7089f58 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.245020045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d05a88dd-9485-48ac-b8e1-0c9bc7089f58 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.248016802Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43614003-ef2e-41a0-8855-b81b6af55b2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.248386879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454239248360332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43614003-ef2e-41a0-8855-b81b6af55b2a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.249137044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eede4f4d-e3b2-4837-bdeb-77e754b3e140 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.249237950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eede4f4d-e3b2-4837-bdeb-77e754b3e140 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:39 pause-693348 crio[2237]: time="2024-07-31 19:30:39.249594202Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eede4f4d-e3b2-4837-bdeb-77e754b3e140 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0ae92c2db4abd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   24 seconds ago       Running             kube-controller-manager   2                   fee5a6505a942       kube-controller-manager-pause-693348
	e564c6c56ae5e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   24 seconds ago       Running             kube-apiserver            2                   26afaaccaa0b4       kube-apiserver-pause-693348
	51a4f1b908a69       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago       Running             etcd                      2                   16bbce28ff478       etcd-pause-693348
	5126ebeb8388e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   24 seconds ago       Running             kube-scheduler            2                   9f57ae7ca7629       kube-scheduler-pause-693348
	26cf1de849324       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   26 seconds ago       Running             kube-proxy                2                   1d6f6f86a3f53       kube-proxy-499j6
	5d2270e9794f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago       Running             coredns                   1                   b52b33f7796d2       coredns-7db6d8ff4d-6fnsb
	3387ae84fb48c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   40 seconds ago       Exited              kube-proxy                1                   1d6f6f86a3f53       kube-proxy-499j6
	cc3011c3828ca       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   40 seconds ago       Exited              kube-controller-manager   1                   fee5a6505a942       kube-controller-manager-pause-693348
	d9774e8fdb6ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   40 seconds ago       Exited              kube-scheduler            1                   9f57ae7ca7629       kube-scheduler-pause-693348
	d7142c59b8ee8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   40 seconds ago       Exited              etcd                      1                   16bbce28ff478       etcd-pause-693348
	b9c5fe953bfb4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   40 seconds ago       Exited              kube-apiserver            1                   26afaaccaa0b4       kube-apiserver-pause-693348
	ea0bc030f2a1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   22d45af9ffc46       coredns-7db6d8ff4d-6fnsb
	
	
	==> coredns [5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58512 - 1748 "HINFO IN 6215870457684000845.2670301909281061572. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013386942s
	
	
	==> coredns [ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54066 - 61204 "HINFO IN 3775059066234186191.3492531731191115626. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015422943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-693348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-693348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=pause-693348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_29_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:29:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-693348
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    pause-693348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 734249b4712b4b8db606fe29bdae397b
	  System UUID:                734249b4-712b-4b8d-b606-fe29bdae397b
	  Boot ID:                    71787efe-f098-430f-878c-7b3fc264d21c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6fnsb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-pause-693348                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-693348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-693348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-499j6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-693348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-693348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  92s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-693348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-693348 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeReady                91s                kubelet          Node pause-693348 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node pause-693348 event: Registered Node pause-693348 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-693348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-693348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-693348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-693348 event: Registered Node pause-693348 in Controller
	
	
	==> dmesg <==
	[  +9.507494] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064726] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.177902] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.134755] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.263307] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.386881] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062460] kauditd_printk_skb: 130 callbacks suppressed
	[Jul31 19:29] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.741942] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.797274] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.082555] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.910325] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[  +0.133254] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.959931] kauditd_printk_skb: 69 callbacks suppressed
	[ +21.164703] systemd-fstab-generator[2154]: Ignoring "noauto" option for root device
	[  +0.152768] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.181943] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.168784] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.279043] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +2.184708] systemd-fstab-generator[2348]: Ignoring "noauto" option for root device
	[Jul31 19:30] kauditd_printk_skb: 195 callbacks suppressed
	[ +13.878511] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.853479] kauditd_printk_skb: 39 callbacks suppressed
	[ +15.962927] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	
	
	==> etcd [51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8] <==
	{"level":"info","ts":"2024-07-31T19:30:15.978859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:15.978886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgPreVoteResp from 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:15.978899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became leader at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.98601Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5245f38ecce3eccc","local-member-attributes":"{Name:pause-693348 ClientURLs:[https://192.168.39.242:2379]}","request-path":"/0/members/5245f38ecce3eccc/attributes","cluster-id":"9dd55050173e419e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:30:15.986098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:15.986513Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:15.997708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:30:15.999333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.242:2379"}
	{"level":"info","ts":"2024-07-31T19:30:16.005709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:16.005756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:38.754527Z","caller":"traceutil/trace.go:171","msg":"trace[1958914525] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"258.644722ms","start":"2024-07-31T19:30:38.495852Z","end":"2024-07-31T19:30:38.754497Z","steps":["trace[1958914525] 'process raft request'  (duration: 258.512289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:30:39.300926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.446541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17063172561502444639 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T19:30:39.301081Z","caller":"traceutil/trace.go:171","msg":"trace[566640949] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"502.75952ms","start":"2024-07-31T19:30:38.7983Z","end":"2024-07-31T19:30:39.30106Z","steps":["trace[566640949] 'read index received'  (duration: 375.29408ms)","trace[566640949] 'applied index is now lower than readState.Index'  (duration: 127.463407ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:30:39.301232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"502.919475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-07-31T19:30:39.301309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.374156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.242\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-31T19:30:39.301335Z","caller":"traceutil/trace.go:171","msg":"trace[1535991554] range","detail":"{range_begin:/registry/masterleases/192.168.39.242; range_end:; response_count:1; response_revision:457; }","duration":"144.428247ms","start":"2024-07-31T19:30:39.156899Z","end":"2024-07-31T19:30:39.301327Z","steps":["trace[1535991554] 'agreement among raft nodes before linearized reading'  (duration: 144.378375ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:30:39.301387Z","caller":"traceutil/trace.go:171","msg":"trace[1158760084] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:457; }","duration":"503.002648ms","start":"2024-07-31T19:30:38.79827Z","end":"2024-07-31T19:30:39.301273Z","steps":["trace[1158760084] 'agreement among raft nodes before linearized reading'  (duration: 502.878103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:30:39.301429Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:38.798255Z","time spent":"503.16169ms","remote":"127.0.0.1:56546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-31T19:30:39.301642Z","caller":"traceutil/trace.go:171","msg":"trace[1077860937] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"652.840504ms","start":"2024-07-31T19:30:38.648788Z","end":"2024-07-31T19:30:39.301629Z","steps":["trace[1077860937] 'process raft request'  (duration: 524.943792ms)","trace[1077860937] 'compare'  (duration: 126.317448ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:30:39.301779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:38.648769Z","time spent":"652.963002ms","remote":"127.0.0.1:56820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" > >"}
	{"level":"warn","ts":"2024-07-31T19:30:39.622639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.843413ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17063172561502444644 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6ccc910a458b9c63>","response":"size:40"}
	{"level":"warn","ts":"2024-07-31T19:30:39.622781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:39.303788Z","time spent":"318.990448ms","remote":"127.0.0.1:56584","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1] <==
	{"level":"info","ts":"2024-07-31T19:29:59.905333Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:01.56542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgPreVoteResp from 5245f38ecce3eccc at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.570092Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5245f38ecce3eccc","local-member-attributes":"{Name:pause-693348 ClientURLs:[https://192.168.39.242:2379]}","request-path":"/0/members/5245f38ecce3eccc/attributes","cluster-id":"9dd55050173e419e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:30:01.570341Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:01.570548Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:01.571099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:01.571165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:01.573832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:30:01.573849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.242:2379"}
	{"level":"info","ts":"2024-07-31T19:30:03.163501Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T19:30:03.164051Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-693348","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.242:2380"],"advertise-client-urls":["https://192.168.39.242:2379"]}
	{"level":"warn","ts":"2024-07-31T19:30:03.164234Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.164372Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.180842Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.180897Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.242:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:30:03.182845Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5245f38ecce3eccc","current-leader-member-id":"5245f38ecce3eccc"}
	{"level":"info","ts":"2024-07-31T19:30:03.194783Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:03.194956Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:03.194982Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-693348","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.242:2380"],"advertise-client-urls":["https://192.168.39.242:2379"]}
	
	
	==> kernel <==
	 19:30:40 up 2 min,  0 users,  load average: 0.65, 0.26, 0.09
	Linux pause-693348 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6] <==
	W0731 19:30:12.422600       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.438038       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.472998       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.493085       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.517215       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.526245       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.592385       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.616949       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.666110       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.674152       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.674346       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.712754       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.787865       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.797243       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.822604       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.889074       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.967948       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.046930       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.053983       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.072622       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.084127       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.133583       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.144761       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.261111       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.296701       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b] <==
	I0731 19:30:18.079214       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:30:18.079397       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:30:18.079445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:30:18.099197       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:30:18.141128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:30:18.142392       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:30:18.143038       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:30:18.143050       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:30:18.143063       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:30:18.148914       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:30:18.157152       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:30:18.171818       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:30:18.180048       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:30:18.949150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:30:19.502888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 19:30:19.520718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 19:30:19.574549       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:30:19.615002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:30:19.622034       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:30:30.985412       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:30:31.084904       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:30:39.302476       1 trace.go:236] Trace[1199425645]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:aae71824-2c9d-40b3-9866-206b0ee763af,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-mnjmpho2cnxz7dbg2ti6x722vq,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-mnjmpho2cnxz7dbg2ti6x722vq,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (31-Jul-2024 19:30:38.645) (total time: 657ms):
	Trace[1199425645]: ["GuaranteedUpdate etcd3" audit-id:aae71824-2c9d-40b3-9866-206b0ee763af,key:/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq,type:*coordination.Lease,resource:leases.coordination.k8s.io 656ms (19:30:38.645)
	Trace[1199425645]:  ---"Txn call completed" 654ms (19:30:39.302)]
	Trace[1199425645]: [657.048463ms] [657.048463ms] END
	
	
	==> kube-controller-manager [0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa] <==
	I0731 19:30:30.783732       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 19:30:30.790813       1 shared_informer.go:320] Caches are synced for node
	I0731 19:30:30.790932       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0731 19:30:30.790973       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 19:30:30.791031       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 19:30:30.791064       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 19:30:30.792792       1 shared_informer.go:320] Caches are synced for service account
	I0731 19:30:30.793991       1 shared_informer.go:320] Caches are synced for deployment
	I0731 19:30:30.795956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 19:30:30.798313       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 19:30:30.799599       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 19:30:30.809293       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 19:30:30.810629       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 19:30:30.814056       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 19:30:30.837359       1 shared_informer.go:320] Caches are synced for HPA
	I0731 19:30:30.842037       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 19:30:30.931253       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 19:30:30.947163       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 19:30:30.959054       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:30:30.970981       1 shared_informer.go:320] Caches are synced for job
	I0731 19:30:30.981824       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 19:30:30.992187       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:30:31.426902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:30:31.467994       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:30:31.468085       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400] <==
	
	
	==> kube-proxy [26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f] <==
	I0731 19:30:13.449554       1 server_linux.go:69] "Using iptables proxy"
	E0731 19:30:13.459492       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-693348\": dial tcp 192.168.39.242:8443: connect: connection refused"
	E0731 19:30:14.600258       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-693348\": dial tcp 192.168.39.242:8443: connect: connection refused"
	I0731 19:30:18.103087       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.242"]
	I0731 19:30:18.175429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:30:18.175560       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:30:18.175686       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:30:18.180029       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:30:18.180613       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:30:18.180641       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:18.182569       1 config.go:192] "Starting service config controller"
	I0731 19:30:18.182627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:30:18.182994       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:30:18.183019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:30:18.184273       1 config.go:319] "Starting node config controller"
	I0731 19:30:18.184300       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:30:18.283298       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:30:18.283281       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:30:18.284328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355] <==
	
	
	==> kube-scheduler [5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7] <==
	I0731 19:30:16.847459       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:30:17.985465       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:30:17.985563       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:30:17.985602       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:30:17.985625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:30:18.066218       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:30:18.066299       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:18.067779       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:30:18.067881       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:30:18.069968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:30:18.067951       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:30:18.170541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6] <==
	I0731 19:30:00.394790       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:30:02.993086       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:30:02.993130       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:30:02.993139       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:30:02.993145       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:30:03.040750       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:30:03.040793       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:03.046490       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:30:03.046707       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:30:03.046747       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:30:03.046831       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:30:03.046908       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0731 19:30:03.047112       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0731 19:30:03.050776       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969272    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-ca-certs\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969294    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-flexvolume-dir\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969342    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-kubeconfig\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: E0731 19:30:14.976479    3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-693348?timeout=10s\": dial tcp 192.168.39.242:8443: connect: connection refused" interval="400ms"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.066645    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.067573    3178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.242:8443: connect: connection refused" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.193348    3178 scope.go:117] "RemoveContainer" containerID="d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.195575    3178 scope.go:117] "RemoveContainer" containerID="d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.197768    3178 scope.go:117] "RemoveContainer" containerID="b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.200084    3178 scope.go:117] "RemoveContainer" containerID="cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.377528    3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-693348?timeout=10s\": dial tcp 192.168.39.242:8443: connect: connection refused" interval="800ms"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.471241    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.472635    3178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.242:8443: connect: connection refused" node="pause-693348"
	Jul 31 19:30:16 pause-693348 kubelet[3178]: I0731 19:30:16.275029    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.195188    3178 kubelet_node_status.go:112] "Node was previously registered" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.195289    3178 kubelet_node_status.go:76] "Successfully registered node" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.198176    3178 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.199600    3178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: E0731 19:30:18.631300    3178 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-pause-693348\" already exists" pod="kube-system/etcd-pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.739249    3178 apiserver.go:52] "Watching apiserver"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.743631    3178 topology_manager.go:215] "Topology Admit Handler" podUID="4e0447f5-1a2d-4a88-ab83-14b300b194af" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6fnsb"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.744081    3178 topology_manager.go:215] "Topology Admit Handler" podUID="5b24930f-7b1b-40d6-ba58-03fa2546d7c9" podNamespace="kube-system" podName="kube-proxy-499j6"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.763228    3178 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.843184    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b24930f-7b1b-40d6-ba58-03fa2546d7c9-lib-modules\") pod \"kube-proxy-499j6\" (UID: \"5b24930f-7b1b-40d6-ba58-03fa2546d7c9\") " pod="kube-system/kube-proxy-499j6"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.843253    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b24930f-7b1b-40d6-ba58-03fa2546d7c9-xtables-lock\") pod \"kube-proxy-499j6\" (UID: \"5b24930f-7b1b-40d6-ba58-03fa2546d7c9\") " pod="kube-system/kube-proxy-499j6"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-693348 -n pause-693348
helpers_test.go:261: (dbg) Run:  kubectl --context pause-693348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-693348 -n pause-693348
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-693348 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-693348 logs -n 25: (1.427831694s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p offline-crio-954897             | offline-crio-954897       | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:25 UTC |
	| start   | -p kubernetes-upgrade-916231       | kubernetes-upgrade-916231 | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-114834        | force-systemd-env-114834  | jenkins | v1.33.1 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:25 UTC |
	| start   | -p stopped-upgrade-096992          | minikube                  | jenkins | v1.26.0 | 31 Jul 24 19:25 UTC | 31 Jul 24 19:27 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:27 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-043979          | running-upgrade-043979    | jenkins | v1.33.1 | 31 Jul 24 19:26 UTC | 31 Jul 24 19:28 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	|         | --no-kubernetes --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-096992 stop        | minikube                  | jenkins | v1.26.0 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:27 UTC |
	| start   | -p stopped-upgrade-096992          | stopped-upgrade-096992    | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC | 31 Jul 24 19:28 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-978325 sudo        | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:27 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-043979          | running-upgrade-043979    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p pause-693348 --memory=2048      | pause-693348              | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:29 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-096992          | stopped-upgrade-096992    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p cert-expiration-362350          | cert-expiration-362350    | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:29 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-978325 sudo        | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-978325             | NoKubernetes-978325       | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:28 UTC |
	| start   | -p force-systemd-flag-748014       | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:28 UTC | 31 Jul 24 19:30 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-693348                    | pause-693348              | jenkins | v1.33.1 | 31 Jul 24 19:29 UTC | 31 Jul 24 19:30 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-748014 ssh cat  | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC | 31 Jul 24 19:30 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-748014       | force-systemd-flag-748014 | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC | 31 Jul 24 19:30 UTC |
	| start   | -p cert-options-235206             | cert-options-235206       | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-916231       | kubernetes-upgrade-916231 | jenkins | v1.33.1 | 31 Jul 24 19:30 UTC |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 19:30:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 19:30:09.554033  445365 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:30:09.554127  445365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:30:09.554130  445365 out.go:304] Setting ErrFile to fd 2...
	I0731 19:30:09.554135  445365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:30:09.554292  445365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:30:09.554900  445365 out.go:298] Setting JSON to false
	I0731 19:30:09.555939  445365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11553,"bootTime":1722442657,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:30:09.555998  445365 start.go:139] virtualization: kvm guest
	I0731 19:30:09.558448  445365 out.go:177] * [cert-options-235206] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:30:09.560212  445365 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:30:09.560202  445365 notify.go:220] Checking for updates...
	I0731 19:30:09.561999  445365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:30:09.563646  445365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:30:09.565285  445365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:09.566784  445365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:30:09.568275  445365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:30:09.570193  445365 config.go:182] Loaded profile config "cert-expiration-362350": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:09.570283  445365 config.go:182] Loaded profile config "kubernetes-upgrade-916231": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 19:30:09.570391  445365 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:09.570502  445365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:30:09.609426  445365 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:30:09.610794  445365 start.go:297] selected driver: kvm2
	I0731 19:30:09.610800  445365 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:30:09.610810  445365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:30:09.611596  445365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:30:09.611687  445365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 19:30:09.627437  445365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 19:30:09.627478  445365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 19:30:09.627679  445365 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 19:30:09.627700  445365 cni.go:84] Creating CNI manager for ""
	I0731 19:30:09.627706  445365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:30:09.627714  445365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 19:30:09.627764  445365 start.go:340] cluster config:
	{Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:30:09.627856  445365 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 19:30:09.629700  445365 out.go:177] * Starting "cert-options-235206" primary control-plane node in "cert-options-235206" cluster
	I0731 19:30:09.630955  445365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:30:09.630984  445365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 19:30:09.630997  445365 cache.go:56] Caching tarball of preloaded images
	I0731 19:30:09.631075  445365 preload.go:172] Found /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 19:30:09.631086  445365 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 19:30:09.631184  445365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json ...
	I0731 19:30:09.631197  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json: {Name:mk04b1712094591b36c04ea2524abe609899b0bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:09.631320  445365 start.go:360] acquireMachinesLock for cert-options-235206: {Name:mkf36d71418ddf471c5bf7b692c41623d576c7b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 19:30:09.631344  445365 start.go:364] duration metric: took 15.757µs to acquireMachinesLock for "cert-options-235206"
	I0731 19:30:09.631357  445365 start.go:93] Provisioning new machine with config: &{Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:30:09.631404  445365 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 19:30:09.633099  445365 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0731 19:30:09.633279  445365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:30:09.633316  445365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:30:09.648608  445365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34747
	I0731 19:30:09.649133  445365 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:30:09.649748  445365 main.go:141] libmachine: Using API Version  1
	I0731 19:30:09.649764  445365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:30:09.650166  445365 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:30:09.650385  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:09.650517  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:09.650690  445365 start.go:159] libmachine.API.Create for "cert-options-235206" (driver="kvm2")
	I0731 19:30:09.650721  445365 client.go:168] LocalClient.Create starting
	I0731 19:30:09.650761  445365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem
	I0731 19:30:09.650812  445365 main.go:141] libmachine: Decoding PEM data...
	I0731 19:30:09.650835  445365 main.go:141] libmachine: Parsing certificate...
	I0731 19:30:09.650905  445365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem
	I0731 19:30:09.650923  445365 main.go:141] libmachine: Decoding PEM data...
	I0731 19:30:09.650933  445365 main.go:141] libmachine: Parsing certificate...
	I0731 19:30:09.650946  445365 main.go:141] libmachine: Running pre-create checks...
	I0731 19:30:09.650955  445365 main.go:141] libmachine: (cert-options-235206) Calling .PreCreateCheck
	I0731 19:30:09.651354  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:09.651740  445365 main.go:141] libmachine: Creating machine...
	I0731 19:30:09.651748  445365 main.go:141] libmachine: (cert-options-235206) Calling .Create
	I0731 19:30:09.651919  445365 main.go:141] libmachine: (cert-options-235206) Creating KVM machine...
	I0731 19:30:09.653255  445365 main.go:141] libmachine: (cert-options-235206) DBG | found existing default KVM network
	I0731 19:30:09.654418  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.654264  445387 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:2f:29} reservation:<nil>}
	I0731 19:30:09.655406  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.655328  445387 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:b3:d7:7e} reservation:<nil>}
	I0731 19:30:09.656198  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.656135  445387 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:f7:51} reservation:<nil>}
	I0731 19:30:09.658571  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.658427  445387 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0731 19:30:09.659769  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.659678  445387 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000404c40}
	I0731 19:30:09.659807  445365 main.go:141] libmachine: (cert-options-235206) DBG | created network xml: 
	I0731 19:30:09.659822  445365 main.go:141] libmachine: (cert-options-235206) DBG | <network>
	I0731 19:30:09.659831  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <name>mk-cert-options-235206</name>
	I0731 19:30:09.659837  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <dns enable='no'/>
	I0731 19:30:09.659845  445365 main.go:141] libmachine: (cert-options-235206) DBG |   
	I0731 19:30:09.659859  445365 main.go:141] libmachine: (cert-options-235206) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0731 19:30:09.659865  445365 main.go:141] libmachine: (cert-options-235206) DBG |     <dhcp>
	I0731 19:30:09.659878  445365 main.go:141] libmachine: (cert-options-235206) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0731 19:30:09.659883  445365 main.go:141] libmachine: (cert-options-235206) DBG |     </dhcp>
	I0731 19:30:09.659886  445365 main.go:141] libmachine: (cert-options-235206) DBG |   </ip>
	I0731 19:30:09.659891  445365 main.go:141] libmachine: (cert-options-235206) DBG |   
	I0731 19:30:09.659901  445365 main.go:141] libmachine: (cert-options-235206) DBG | </network>
	I0731 19:30:09.659932  445365 main.go:141] libmachine: (cert-options-235206) DBG | 
	I0731 19:30:09.665980  445365 main.go:141] libmachine: (cert-options-235206) DBG | trying to create private KVM network mk-cert-options-235206 192.168.83.0/24...
	I0731 19:30:09.737171  445365 main.go:141] libmachine: (cert-options-235206) DBG | private KVM network mk-cert-options-235206 192.168.83.0/24 created
	I0731 19:30:09.737218  445365 main.go:141] libmachine: (cert-options-235206) Setting up store path in /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 ...
	I0731 19:30:09.737228  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:09.737149  445387 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:09.737241  445365 main.go:141] libmachine: (cert-options-235206) Building disk image from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 19:30:09.738033  445365 main.go:141] libmachine: (cert-options-235206) Downloading /home/jenkins/minikube-integration/19356-395032/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 19:30:10.009172  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.009041  445387 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa...
	I0731 19:30:10.176448  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.176289  445387 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/cert-options-235206.rawdisk...
	I0731 19:30:10.176469  445365 main.go:141] libmachine: (cert-options-235206) DBG | Writing magic tar header
	I0731 19:30:10.176483  445365 main.go:141] libmachine: (cert-options-235206) DBG | Writing SSH key tar header
	I0731 19:30:10.176605  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:10.176493  445387 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 ...
	I0731 19:30:10.176636  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206
	I0731 19:30:10.176650  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206 (perms=drwx------)
	I0731 19:30:10.176660  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube/machines
	I0731 19:30:10.176686  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:30:10.176695  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube/machines (perms=drwxr-xr-x)
	I0731 19:30:10.176703  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032/.minikube (perms=drwxr-xr-x)
	I0731 19:30:10.176713  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration/19356-395032 (perms=drwxrwxr-x)
	I0731 19:30:10.176721  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 19:30:10.176739  445365 main.go:141] libmachine: (cert-options-235206) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 19:30:10.176747  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19356-395032
	I0731 19:30:10.176758  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 19:30:10.176765  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home/jenkins
	I0731 19:30:10.176798  445365 main.go:141] libmachine: (cert-options-235206) Creating domain...
	I0731 19:30:10.176806  445365 main.go:141] libmachine: (cert-options-235206) DBG | Checking permissions on dir: /home
	I0731 19:30:10.176817  445365 main.go:141] libmachine: (cert-options-235206) DBG | Skipping /home - not owner
	I0731 19:30:10.178267  445365 main.go:141] libmachine: (cert-options-235206) define libvirt domain using xml: 
	I0731 19:30:10.178294  445365 main.go:141] libmachine: (cert-options-235206) <domain type='kvm'>
	I0731 19:30:10.178304  445365 main.go:141] libmachine: (cert-options-235206)   <name>cert-options-235206</name>
	I0731 19:30:10.178310  445365 main.go:141] libmachine: (cert-options-235206)   <memory unit='MiB'>2048</memory>
	I0731 19:30:10.178318  445365 main.go:141] libmachine: (cert-options-235206)   <vcpu>2</vcpu>
	I0731 19:30:10.178326  445365 main.go:141] libmachine: (cert-options-235206)   <features>
	I0731 19:30:10.178332  445365 main.go:141] libmachine: (cert-options-235206)     <acpi/>
	I0731 19:30:10.178337  445365 main.go:141] libmachine: (cert-options-235206)     <apic/>
	I0731 19:30:10.178341  445365 main.go:141] libmachine: (cert-options-235206)     <pae/>
	I0731 19:30:10.178346  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178350  445365 main.go:141] libmachine: (cert-options-235206)   </features>
	I0731 19:30:10.178356  445365 main.go:141] libmachine: (cert-options-235206)   <cpu mode='host-passthrough'>
	I0731 19:30:10.178360  445365 main.go:141] libmachine: (cert-options-235206)   
	I0731 19:30:10.178370  445365 main.go:141] libmachine: (cert-options-235206)   </cpu>
	I0731 19:30:10.178375  445365 main.go:141] libmachine: (cert-options-235206)   <os>
	I0731 19:30:10.178378  445365 main.go:141] libmachine: (cert-options-235206)     <type>hvm</type>
	I0731 19:30:10.178382  445365 main.go:141] libmachine: (cert-options-235206)     <boot dev='cdrom'/>
	I0731 19:30:10.178385  445365 main.go:141] libmachine: (cert-options-235206)     <boot dev='hd'/>
	I0731 19:30:10.178392  445365 main.go:141] libmachine: (cert-options-235206)     <bootmenu enable='no'/>
	I0731 19:30:10.178396  445365 main.go:141] libmachine: (cert-options-235206)   </os>
	I0731 19:30:10.178413  445365 main.go:141] libmachine: (cert-options-235206)   <devices>
	I0731 19:30:10.178417  445365 main.go:141] libmachine: (cert-options-235206)     <disk type='file' device='cdrom'>
	I0731 19:30:10.178424  445365 main.go:141] libmachine: (cert-options-235206)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/boot2docker.iso'/>
	I0731 19:30:10.178429  445365 main.go:141] libmachine: (cert-options-235206)       <target dev='hdc' bus='scsi'/>
	I0731 19:30:10.178433  445365 main.go:141] libmachine: (cert-options-235206)       <readonly/>
	I0731 19:30:10.178436  445365 main.go:141] libmachine: (cert-options-235206)     </disk>
	I0731 19:30:10.178441  445365 main.go:141] libmachine: (cert-options-235206)     <disk type='file' device='disk'>
	I0731 19:30:10.178449  445365 main.go:141] libmachine: (cert-options-235206)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 19:30:10.178457  445365 main.go:141] libmachine: (cert-options-235206)       <source file='/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/cert-options-235206.rawdisk'/>
	I0731 19:30:10.178465  445365 main.go:141] libmachine: (cert-options-235206)       <target dev='hda' bus='virtio'/>
	I0731 19:30:10.178474  445365 main.go:141] libmachine: (cert-options-235206)     </disk>
	I0731 19:30:10.178477  445365 main.go:141] libmachine: (cert-options-235206)     <interface type='network'>
	I0731 19:30:10.178482  445365 main.go:141] libmachine: (cert-options-235206)       <source network='mk-cert-options-235206'/>
	I0731 19:30:10.178486  445365 main.go:141] libmachine: (cert-options-235206)       <model type='virtio'/>
	I0731 19:30:10.178490  445365 main.go:141] libmachine: (cert-options-235206)     </interface>
	I0731 19:30:10.178493  445365 main.go:141] libmachine: (cert-options-235206)     <interface type='network'>
	I0731 19:30:10.178498  445365 main.go:141] libmachine: (cert-options-235206)       <source network='default'/>
	I0731 19:30:10.178501  445365 main.go:141] libmachine: (cert-options-235206)       <model type='virtio'/>
	I0731 19:30:10.178505  445365 main.go:141] libmachine: (cert-options-235206)     </interface>
	I0731 19:30:10.178508  445365 main.go:141] libmachine: (cert-options-235206)     <serial type='pty'>
	I0731 19:30:10.178512  445365 main.go:141] libmachine: (cert-options-235206)       <target port='0'/>
	I0731 19:30:10.178515  445365 main.go:141] libmachine: (cert-options-235206)     </serial>
	I0731 19:30:10.178551  445365 main.go:141] libmachine: (cert-options-235206)     <console type='pty'>
	I0731 19:30:10.178569  445365 main.go:141] libmachine: (cert-options-235206)       <target type='serial' port='0'/>
	I0731 19:30:10.178578  445365 main.go:141] libmachine: (cert-options-235206)     </console>
	I0731 19:30:10.178584  445365 main.go:141] libmachine: (cert-options-235206)     <rng model='virtio'>
	I0731 19:30:10.178594  445365 main.go:141] libmachine: (cert-options-235206)       <backend model='random'>/dev/random</backend>
	I0731 19:30:10.178600  445365 main.go:141] libmachine: (cert-options-235206)     </rng>
	I0731 19:30:10.178607  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178613  445365 main.go:141] libmachine: (cert-options-235206)     
	I0731 19:30:10.178620  445365 main.go:141] libmachine: (cert-options-235206)   </devices>
	I0731 19:30:10.178625  445365 main.go:141] libmachine: (cert-options-235206) </domain>
	I0731 19:30:10.178638  445365 main.go:141] libmachine: (cert-options-235206) 
	I0731 19:30:10.183421  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:97:58:c9 in network default
	I0731 19:30:10.184037  445365 main.go:141] libmachine: (cert-options-235206) Ensuring networks are active...
	I0731 19:30:10.184066  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:10.184881  445365 main.go:141] libmachine: (cert-options-235206) Ensuring network default is active
	I0731 19:30:10.185209  445365 main.go:141] libmachine: (cert-options-235206) Ensuring network mk-cert-options-235206 is active
	I0731 19:30:10.185702  445365 main.go:141] libmachine: (cert-options-235206) Getting domain xml...
	I0731 19:30:10.186440  445365 main.go:141] libmachine: (cert-options-235206) Creating domain...
	I0731 19:30:11.436559  445365 main.go:141] libmachine: (cert-options-235206) Waiting to get IP...
	I0731 19:30:11.437740  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.438185  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.438230  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.438163  445387 retry.go:31] will retry after 267.133489ms: waiting for machine to come up
	I0731 19:30:11.706829  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.707382  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.707399  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.707342  445387 retry.go:31] will retry after 273.164557ms: waiting for machine to come up
	I0731 19:30:11.981766  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:11.982326  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:11.982341  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:11.982280  445387 retry.go:31] will retry after 297.205185ms: waiting for machine to come up
	I0731 19:30:12.281009  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:12.281509  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:12.281573  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:12.281466  445387 retry.go:31] will retry after 440.374161ms: waiting for machine to come up
	I0731 19:30:12.723178  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:12.723695  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:12.723718  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:12.723634  445387 retry.go:31] will retry after 645.282592ms: waiting for machine to come up
	I0731 19:30:13.370136  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:13.370704  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:13.370723  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:13.370648  445387 retry.go:31] will retry after 840.138457ms: waiting for machine to come up
	I0731 19:30:14.212821  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:14.213300  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:14.213324  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:14.213240  445387 retry.go:31] will retry after 1.17522735s: waiting for machine to come up
	I0731 19:30:13.417337  444999 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355 cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400 d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6 d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1 b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6 ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 533231090afa8f4e8616058b730098c78d46065abff419edc8b28d2ebf494a0c a00f738011bf10471009bcae207a0de27a0ca9582080714febc0d04bf1989516 fb000af18439f1f52a0eb9fb84a52af9284dbcd15cdefc9564ce4d4658a49ba9 9b559cc5dbd72656d5a84056ecbd180294d0c90f44ad7502bef2f0c0f906aee3: (13.531690513s)
	W0731 19:30:13.417432  444999 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355 cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400 d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6 d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1 b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6 ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 533231090afa8f4e8616058b730098c78d46065abff419edc8b28d2ebf494a0c a00f738011bf10471009bcae207a0de27a0ca9582080714febc0d04bf1989516 fb000af18439f1f52a0eb9fb84a52af9284dbcd15cdefc9564ce4d4658a49ba9 9b559cc5dbd72656d5a84056ecbd180294d0c90f44ad7502bef2f0c0f906aee3: Process exited with status 1
	stdout:
	3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355
	cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400
	d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6
	d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1
	b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6
	ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd
	
	stderr:
	E0731 19:30:13.409359    2892 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": container with ID starting with 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 not found: ID does not exist" containerID="60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486"
	time="2024-07-31T19:30:13Z" level=fatal msg="stopping the container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": rpc error: code = NotFound desc = could not find container \"60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486\": container with ID starting with 60b11084d9c63af08e16c782487d634750e922a58ff2b44e95acdaf5aeb76486 not found: ID does not exist"
	I0731 19:30:13.417520  444999 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 19:30:13.473335  444999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:30:13.486805  444999 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 31 19:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 31 19:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 31 19:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 31 19:29 /etc/kubernetes/scheduler.conf
	
	I0731 19:30:13.486907  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 19:30:13.498420  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 19:30:13.509870  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 19:30:13.519692  444999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:30:13.519762  444999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:30:13.529372  444999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 19:30:13.538641  444999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:30:13.538718  444999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:30:13.550385  444999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:30:13.560704  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:13.628802  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.365861  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.602677  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.667574  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:14.744644  444999 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:14.744740  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.245670  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.745157  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:15.761596  444999 api_server.go:72] duration metric: took 1.016962124s to wait for apiserver process to appear ...
	I0731 19:30:15.761628  444999 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:15.761653  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:17.984280  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:30:17.984313  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:30:17.984326  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.036329  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 19:30:18.036384  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 19:30:18.262753  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.267507  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:30:18.267552  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:30:18.762584  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:18.768080  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 19:30:18.768138  444999 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 19:30:19.262725  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:19.267720  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0731 19:30:19.274514  444999 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:19.274541  444999 api_server.go:131] duration metric: took 3.512907565s to wait for apiserver health ...
	I0731 19:30:19.274551  444999 cni.go:84] Creating CNI manager for ""
	I0731 19:30:19.274558  444999 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:30:19.276671  444999 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
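	The healthz wait recorded above retries https://192.168.39.242:8443/healthz until it returns 200, treating the anonymous-user 403s and the 500s from unfinished post-start hooks as "not ready yet". The following is only an illustrative sketch of that polling pattern, not minikube's api_server.go; the insecure TLS client and the 500ms retry interval are assumptions for the example.

	```go
	// Illustrative healthz polling sketch (assumed client settings, not minikube source).
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// 403 (anonymous user) and 500 (post-start hooks still running) responses,
	// as seen in the log above, are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.242:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}
	```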
	I0731 19:30:15.390860  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:15.391461  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:15.391635  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:15.391409  445387 retry.go:31] will retry after 1.42107697s: waiting for machine to come up
	I0731 19:30:16.814500  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:16.815128  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:16.815150  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:16.815074  445387 retry.go:31] will retry after 1.296362905s: waiting for machine to come up
	I0731 19:30:18.113814  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:18.114391  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:18.114413  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:18.114333  445387 retry.go:31] will retry after 1.980219574s: waiting for machine to come up
	I0731 19:30:19.278235  444999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 19:30:19.294552  444999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 19:30:19.316402  444999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:19.342508  444999 system_pods.go:59] 6 kube-system pods found
	I0731 19:30:19.342552  444999 system_pods.go:61] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:19.342563  444999 system_pods.go:61] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 19:30:19.342586  444999 system_pods.go:61] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 19:30:19.342596  444999 system_pods.go:61] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 19:30:19.342602  444999 system_pods.go:61] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:19.342610  444999 system_pods.go:61] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 19:30:19.342619  444999 system_pods.go:74] duration metric: took 26.191086ms to wait for pod list to return data ...
	I0731 19:30:19.342630  444999 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:19.349802  444999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:19.349835  444999 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:19.349849  444999 node_conditions.go:105] duration metric: took 7.211749ms to run NodePressure ...
	I0731 19:30:19.349883  444999 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 19:30:19.654878  444999 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 19:30:19.660211  444999 kubeadm.go:739] kubelet initialised
	I0731 19:30:19.660231  444999 kubeadm.go:740] duration metric: took 5.325891ms waiting for restarted kubelet to initialise ...
	I0731 19:30:19.660239  444999 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:19.665887  444999 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:19.671210  444999 pod_ready.go:92] pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:19.671232  444999 pod_ready.go:81] duration metric: took 5.313182ms for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:19.671241  444999 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:20.096051  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:20.096593  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:20.096618  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:20.096533  445387 retry.go:31] will retry after 2.40569587s: waiting for machine to come up
	I0731 19:30:22.503909  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:22.504562  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:22.504585  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:22.504501  445387 retry.go:31] will retry after 2.942445364s: waiting for machine to come up
	I0731 19:30:21.677400  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:23.677496  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:25.448207  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:25.448704  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find current IP address of domain cert-options-235206 in network mk-cert-options-235206
	I0731 19:30:25.448726  445365 main.go:141] libmachine: (cert-options-235206) DBG | I0731 19:30:25.448644  445387 retry.go:31] will retry after 4.350415899s: waiting for machine to come up
	I0731 19:30:25.678298  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:27.678406  444999 pod_ready.go:102] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:28.678352  444999 pod_ready.go:92] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:28.678384  444999 pod_ready.go:81] duration metric: took 9.007134981s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:28.678396  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:29.800441  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.800990  445365 main.go:141] libmachine: (cert-options-235206) Found IP for machine: 192.168.83.131
	I0731 19:30:29.801015  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has current primary IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.801021  445365 main.go:141] libmachine: (cert-options-235206) Reserving static IP address...
	I0731 19:30:29.801374  445365 main.go:141] libmachine: (cert-options-235206) DBG | unable to find host DHCP lease matching {name: "cert-options-235206", mac: "52:54:00:ea:6f:ac", ip: "192.168.83.131"} in network mk-cert-options-235206
	I0731 19:30:29.883389  445365 main.go:141] libmachine: (cert-options-235206) DBG | Getting to WaitForSSH function...
	I0731 19:30:29.883406  445365 main.go:141] libmachine: (cert-options-235206) Reserved static IP address: 192.168.83.131
	I0731 19:30:29.883417  445365 main.go:141] libmachine: (cert-options-235206) Waiting for SSH to be available...
	I0731 19:30:29.886258  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.886498  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:29.886519  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:29.886638  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using SSH client type: external
	I0731 19:30:29.886658  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using SSH private key: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa (-rw-------)
	I0731 19:30:29.886691  445365 main.go:141] libmachine: (cert-options-235206) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.131 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 19:30:29.886699  445365 main.go:141] libmachine: (cert-options-235206) DBG | About to run SSH command:
	I0731 19:30:29.886710  445365 main.go:141] libmachine: (cert-options-235206) DBG | exit 0
	I0731 19:30:30.012613  445365 main.go:141] libmachine: (cert-options-235206) DBG | SSH cmd err, output: <nil>: 
	I0731 19:30:30.012932  445365 main.go:141] libmachine: (cert-options-235206) KVM machine creation complete!
	I0731 19:30:30.013272  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:30.013838  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:30.014061  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:30.014220  445365 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 19:30:30.014235  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetState
	I0731 19:30:30.015464  445365 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 19:30:30.015482  445365 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 19:30:30.015487  445365 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 19:30:30.015495  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.017735  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.018058  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.018079  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.018213  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.018435  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.018610  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.018790  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.018961  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.019172  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.019180  445365 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 19:30:30.128088  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:30:30.128102  445365 main.go:141] libmachine: Detecting the provisioner...
	I0731 19:30:30.128112  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.131297  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.131730  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.131752  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.131972  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.132185  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.132425  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.132602  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.132811  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.132989  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.132995  445365 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 19:30:30.245744  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 19:30:30.245917  445365 main.go:141] libmachine: found compatible host: buildroot
	I0731 19:30:30.245927  445365 main.go:141] libmachine: Provisioning with buildroot...
	I0731 19:30:30.245935  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.246239  445365 buildroot.go:166] provisioning hostname "cert-options-235206"
	I0731 19:30:30.246253  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.246498  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.249662  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.250097  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.250121  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.250313  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.250503  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.250651  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.250854  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.251001  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.251230  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.251238  445365 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-options-235206 && echo "cert-options-235206" | sudo tee /etc/hostname
	I0731 19:30:30.373209  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-options-235206
	
	I0731 19:30:30.373227  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.376165  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.376577  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.376600  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.376819  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.377013  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.377163  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.377265  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.377434  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.377606  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.377616  445365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-options-235206' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-options-235206/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-options-235206' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 19:30:30.494988  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 19:30:30.495009  445365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19356-395032/.minikube CaCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19356-395032/.minikube}
	I0731 19:30:30.495032  445365 buildroot.go:174] setting up certificates
	I0731 19:30:30.495046  445365 provision.go:84] configureAuth start
	I0731 19:30:30.495058  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetMachineName
	I0731 19:30:30.495371  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:30.498222  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.498627  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.498649  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.498847  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.501100  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.501406  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.501437  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.501551  445365 provision.go:143] copyHostCerts
	I0731 19:30:30.501621  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem, removing ...
	I0731 19:30:30.501627  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem
	I0731 19:30:30.501689  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/ca.pem (1082 bytes)
	I0731 19:30:30.501786  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem, removing ...
	I0731 19:30:30.501790  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem
	I0731 19:30:30.501812  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/cert.pem (1123 bytes)
	I0731 19:30:30.501887  445365 exec_runner.go:144] found /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem, removing ...
	I0731 19:30:30.501890  445365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem
	I0731 19:30:30.501909  445365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19356-395032/.minikube/key.pem (1675 bytes)
	I0731 19:30:30.501964  445365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem org=jenkins.cert-options-235206 san=[127.0.0.1 192.168.83.131 cert-options-235206 localhost minikube]
	I0731 19:30:30.610211  445365 provision.go:177] copyRemoteCerts
	I0731 19:30:30.610267  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 19:30:30.610293  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.613212  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.613613  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.613632  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.613861  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.614057  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.614240  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.614459  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:30.704610  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0731 19:30:30.731319  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0731 19:30:30.756432  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 19:30:30.782427  445365 provision.go:87] duration metric: took 287.366086ms to configureAuth
	I0731 19:30:30.782448  445365 buildroot.go:189] setting minikube options for container-runtime
	I0731 19:30:30.782690  445365 config.go:182] Loaded profile config "cert-options-235206": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:30.782768  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:30.786189  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.786528  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:30.786563  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:30.786722  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:30.786986  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.787190  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:30.787367  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:30.787571  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:30.787827  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:30.787842  445365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 19:30:31.068104  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 19:30:31.068125  445365 main.go:141] libmachine: Checking connection to Docker...
	I0731 19:30:31.068134  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetURL
	I0731 19:30:31.069635  445365 main.go:141] libmachine: (cert-options-235206) DBG | Using libvirt version 6000000
	I0731 19:30:31.072097  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.072445  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.072467  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.072609  445365 main.go:141] libmachine: Docker is up and running!
	I0731 19:30:31.072619  445365 main.go:141] libmachine: Reticulating splines...
	I0731 19:30:31.072625  445365 client.go:171] duration metric: took 21.421898266s to LocalClient.Create
	I0731 19:30:31.072668  445365 start.go:167] duration metric: took 21.42196397s to libmachine.API.Create "cert-options-235206"
	I0731 19:30:31.072676  445365 start.go:293] postStartSetup for "cert-options-235206" (driver="kvm2")
	I0731 19:30:31.072687  445365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 19:30:31.072704  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.072990  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 19:30:31.073007  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.075136  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.075428  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.075448  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.075648  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.075835  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.075980  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.076149  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.160193  445365 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 19:30:31.164368  445365 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 19:30:31.164407  445365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/addons for local assets ...
	I0731 19:30:31.164480  445365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19356-395032/.minikube/files for local assets ...
	I0731 19:30:31.164561  445365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem -> 4023132.pem in /etc/ssl/certs
	I0731 19:30:31.164658  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 19:30:31.174214  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:30:31.201068  445365 start.go:296] duration metric: took 128.37662ms for postStartSetup
	I0731 19:30:31.201125  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetConfigRaw
	I0731 19:30:31.201795  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:31.204789  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.205141  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.205159  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.205401  445365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/config.json ...
	I0731 19:30:31.205582  445365 start.go:128] duration metric: took 21.574169362s to createHost
	I0731 19:30:31.205599  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.208041  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.208422  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.208443  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.208608  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.208792  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.208946  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.209084  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.209221  445365 main.go:141] libmachine: Using SSH client type: native
	I0731 19:30:31.209392  445365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.131 22 <nil> <nil>}
	I0731 19:30:31.209397  445365 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 19:30:31.325330  445365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722454231.295084941
	
	I0731 19:30:31.325346  445365 fix.go:216] guest clock: 1722454231.295084941
	I0731 19:30:31.325352  445365 fix.go:229] Guest: 2024-07-31 19:30:31.295084941 +0000 UTC Remote: 2024-07-31 19:30:31.205587263 +0000 UTC m=+21.688036733 (delta=89.497678ms)
	I0731 19:30:31.325380  445365 fix.go:200] guest clock delta is within tolerance: 89.497678ms
	I0731 19:30:31.325388  445365 start.go:83] releasing machines lock for "cert-options-235206", held for 21.694038682s
	I0731 19:30:31.325404  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.325689  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:31.328431  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.328849  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.328904  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.329047  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329608  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329750  445365 main.go:141] libmachine: (cert-options-235206) Calling .DriverName
	I0731 19:30:31.329828  445365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 19:30:31.329854  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.329944  445365 ssh_runner.go:195] Run: cat /version.json
	I0731 19:30:31.329957  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHHostname
	I0731 19:30:31.332713  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.332913  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333056  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.333071  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333195  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.333331  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:31.333337  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.333347  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:31.333493  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.333509  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHPort
	I0731 19:30:31.333643  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHKeyPath
	I0731 19:30:31.333651  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.333768  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetSSHUsername
	I0731 19:30:31.333901  445365 sshutil.go:53] new ssh client: &{IP:192.168.83.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/cert-options-235206/id_rsa Username:docker}
	I0731 19:30:31.418285  445365 ssh_runner.go:195] Run: systemctl --version
	I0731 19:30:31.444795  445365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 19:30:31.605971  445365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 19:30:31.613256  445365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 19:30:31.613319  445365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 19:30:31.629683  445365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 19:30:31.629700  445365 start.go:495] detecting cgroup driver to use...
	I0731 19:30:31.629782  445365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 19:30:31.646731  445365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 19:30:31.662539  445365 docker.go:217] disabling cri-docker service (if available) ...
	I0731 19:30:31.662598  445365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 19:30:31.677993  445365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 19:30:31.695339  445365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 19:30:31.826867  445365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 19:30:31.987460  445365 docker.go:233] disabling docker service ...
	I0731 19:30:31.987524  445365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 19:30:32.002319  445365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 19:30:32.016257  445365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 19:30:32.136724  445365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 19:30:32.253918  445365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 19:30:32.267828  445365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 19:30:32.287270  445365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 19:30:32.287317  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.297573  445365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 19:30:32.297625  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.307953  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.317966  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.328639  445365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 19:30:32.339594  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.350344  445365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.367732  445365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 19:30:32.378104  445365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 19:30:32.387353  445365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 19:30:32.387405  445365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 19:30:32.399806  445365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 19:30:32.409238  445365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:30:32.525701  445365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 19:30:32.682886  445365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 19:30:32.682964  445365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 19:30:32.688511  445365 start.go:563] Will wait 60s for crictl version
	I0731 19:30:32.688568  445365 ssh_runner.go:195] Run: which crictl
	I0731 19:30:32.692463  445365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 19:30:32.732031  445365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 19:30:32.732124  445365 ssh_runner.go:195] Run: crio --version
	I0731 19:30:32.764525  445365 ssh_runner.go:195] Run: crio --version
	I0731 19:30:32.805679  445365 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 19:30:30.685845  444999 pod_ready.go:102] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:33.186539  444999 pod_ready.go:102] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"False"
	I0731 19:30:33.691674  444999 pod_ready.go:92] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.691698  444999 pod_ready.go:81] duration metric: took 5.013294318s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.691711  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.697738  444999 pod_ready.go:92] pod "kube-controller-manager-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.697765  444999 pod_ready.go:81] duration metric: took 6.044943ms for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.697778  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.702596  444999 pod_ready.go:92] pod "kube-proxy-499j6" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:33.702624  444999 pod_ready.go:81] duration metric: took 4.837737ms for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:33.702636  444999 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.211929  444999 pod_ready.go:92] pod "kube-scheduler-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.211958  444999 pod_ready.go:81] duration metric: took 509.313243ms for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.211968  444999 pod_ready.go:38] duration metric: took 14.551720121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:34.211996  444999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 19:30:34.230048  444999 ops.go:34] apiserver oom_adj: -16
	I0731 19:30:34.230076  444999 kubeadm.go:597] duration metric: took 34.438051393s to restartPrimaryControlPlane
	I0731 19:30:34.230087  444999 kubeadm.go:394] duration metric: took 34.578892558s to StartCluster
	I0731 19:30:34.230111  444999 settings.go:142] acquiring lock: {Name:mk1436d8602b50b889f1e37b04734d29b98e5c64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:34.230207  444999 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:30:34.231596  444999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/kubeconfig: {Name:mkbef230cd3a0ca6a73f9ef110de3971617d5962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:34.231914  444999 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 19:30:34.232146  444999 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 19:30:34.232367  444999 config.go:182] Loaded profile config "pause-693348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:30:34.233918  444999 out.go:177] * Enabled addons: 
	I0731 19:30:34.233927  444999 out.go:177] * Verifying Kubernetes components...
	I0731 19:30:32.806977  445365 main.go:141] libmachine: (cert-options-235206) Calling .GetIP
	I0731 19:30:32.809743  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:32.810027  445365 main.go:141] libmachine: (cert-options-235206) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:6f:ac", ip: ""} in network mk-cert-options-235206: {Iface:virbr4 ExpiryTime:2024-07-31 20:30:24 +0000 UTC Type:0 Mac:52:54:00:ea:6f:ac Iaid: IPaddr:192.168.83.131 Prefix:24 Hostname:cert-options-235206 Clientid:01:52:54:00:ea:6f:ac}
	I0731 19:30:32.810044  445365 main.go:141] libmachine: (cert-options-235206) DBG | domain cert-options-235206 has defined IP address 192.168.83.131 and MAC address 52:54:00:ea:6f:ac in network mk-cert-options-235206
	I0731 19:30:32.810355  445365 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0731 19:30:32.814796  445365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
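The one-liner above is the idempotent /etc/hosts update: drop any old entry for the name, append a fresh one, then copy the result back with sudo (a plain shell redirect would not run as root). A hypothetical helper with the same shape, for illustration only:
	update_hosts_entry() {                     # hypothetical name, not from the minikube code
	    local ip="$1" name="$2"
	    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "${ip}" "${name}"; } > "/tmp/h.$$"
	    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	# e.g. update_hosts_entry 192.168.83.1 host.minikube.internal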
	I0731 19:30:32.827656  445365 kubeadm.go:883] updating cluster {Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.131 Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 19:30:32.827762  445365 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 19:30:32.827804  445365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:30:32.865615  445365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 19:30:32.865668  445365 ssh_runner.go:195] Run: which lz4
	I0731 19:30:32.869966  445365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 19:30:32.874203  445365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 19:30:32.874231  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 19:30:34.319994  445365 crio.go:462] duration metric: took 1.45008048s to copy over tarball
	I0731 19:30:34.320089  445365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
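No preloaded images were found in the guest, so this run copies the ~406 MB preload tarball into the VM and unpacks it under /var. The equivalent manual sequence, with <cache> and <guest> as placeholders for the host cache directory and the VM address:
	scp <cache>/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 root@<guest>:/preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo crictl images --output json    # the registry.k8s.io images should now be listed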
	I0731 19:30:34.235295  444999 addons.go:510] duration metric: took 3.148477ms for enable addons: enabled=[]
	I0731 19:30:34.235423  444999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:30:34.420421  444999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:30:34.435416  444999 node_ready.go:35] waiting up to 6m0s for node "pause-693348" to be "Ready" ...
	I0731 19:30:34.440168  444999 node_ready.go:49] node "pause-693348" has status "Ready":"True"
	I0731 19:30:34.440205  444999 node_ready.go:38] duration metric: took 4.747037ms for node "pause-693348" to be "Ready" ...
	I0731 19:30:34.440220  444999 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:34.446191  444999 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.485356  444999 pod_ready.go:92] pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.485381  444999 pod_ready.go:81] duration metric: took 39.155185ms for pod "coredns-7db6d8ff4d-6fnsb" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.485394  444999 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.884349  444999 pod_ready.go:92] pod "etcd-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:34.884405  444999 pod_ready.go:81] duration metric: took 399.000924ms for pod "etcd-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:34.884422  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.283356  444999 pod_ready.go:92] pod "kube-apiserver-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:35.283384  444999 pod_ready.go:81] duration metric: took 398.954086ms for pod "kube-apiserver-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.283393  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.683368  444999 pod_ready.go:92] pod "kube-controller-manager-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:35.683399  444999 pod_ready.go:81] duration metric: took 399.998844ms for pod "kube-controller-manager-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:35.683410  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.083510  444999 pod_ready.go:92] pod "kube-proxy-499j6" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:36.083541  444999 pod_ready.go:81] duration metric: took 400.125086ms for pod "kube-proxy-499j6" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.083552  444999 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.483707  444999 pod_ready.go:92] pod "kube-scheduler-pause-693348" in "kube-system" namespace has status "Ready":"True"
	I0731 19:30:36.483746  444999 pod_ready.go:81] duration metric: took 400.183759ms for pod "kube-scheduler-pause-693348" in "kube-system" namespace to be "Ready" ...
	I0731 19:30:36.483757  444999 pod_ready.go:38] duration metric: took 2.043522081s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 19:30:36.483783  444999 api_server.go:52] waiting for apiserver process to appear ...
	I0731 19:30:36.483854  444999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:30:36.500797  444999 api_server.go:72] duration metric: took 2.268834929s to wait for apiserver process to appear ...
	I0731 19:30:36.500843  444999 api_server.go:88] waiting for apiserver healthz status ...
	I0731 19:30:36.500871  444999 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I0731 19:30:36.506805  444999 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I0731 19:30:36.509137  444999 api_server.go:141] control plane version: v1.30.3
	I0731 19:30:36.509160  444999 api_server.go:131] duration metric: took 8.309276ms to wait for apiserver health ...
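The healthz check above is an HTTPS GET against the apiserver; an illustrative manual equivalent (certificate verification skipped, and depending on the cluster's anonymous-auth setting it may need client credentials):
	curl -k https://192.168.39.242:8443/healthz     # expect HTTP 200 with body "ok"
	curl -k https://192.168.39.242:8443/version     # reports the control-plane version (v1.30.3 here)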
	I0731 19:30:36.509169  444999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 19:30:36.685299  444999 system_pods.go:59] 6 kube-system pods found
	I0731 19:30:36.685330  444999 system_pods.go:61] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:36.685335  444999 system_pods.go:61] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running
	I0731 19:30:36.685340  444999 system_pods.go:61] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running
	I0731 19:30:36.685345  444999 system_pods.go:61] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running
	I0731 19:30:36.685350  444999 system_pods.go:61] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:36.685354  444999 system_pods.go:61] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running
	I0731 19:30:36.685363  444999 system_pods.go:74] duration metric: took 176.186483ms to wait for pod list to return data ...
	I0731 19:30:36.685371  444999 default_sa.go:34] waiting for default service account to be created ...
	I0731 19:30:36.883693  444999 default_sa.go:45] found service account: "default"
	I0731 19:30:36.883730  444999 default_sa.go:55] duration metric: took 198.350935ms for default service account to be created ...
	I0731 19:30:36.883749  444999 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 19:30:37.085375  444999 system_pods.go:86] 6 kube-system pods found
	I0731 19:30:37.085406  444999 system_pods.go:89] "coredns-7db6d8ff4d-6fnsb" [4e0447f5-1a2d-4a88-ab83-14b300b194af] Running
	I0731 19:30:37.085412  444999 system_pods.go:89] "etcd-pause-693348" [2f708161-103d-4a89-8a2d-e005ca7c8f0e] Running
	I0731 19:30:37.085416  444999 system_pods.go:89] "kube-apiserver-pause-693348" [58648eb7-c37c-4a8a-9c3a-8221ceeaa9cf] Running
	I0731 19:30:37.085424  444999 system_pods.go:89] "kube-controller-manager-pause-693348" [396fa766-3c66-46f0-9a62-46d234c2b878] Running
	I0731 19:30:37.085427  444999 system_pods.go:89] "kube-proxy-499j6" [5b24930f-7b1b-40d6-ba58-03fa2546d7c9] Running
	I0731 19:30:37.085434  444999 system_pods.go:89] "kube-scheduler-pause-693348" [b8f957ce-1f2b-435d-9c29-f899ab03dcf1] Running
	I0731 19:30:37.085444  444999 system_pods.go:126] duration metric: took 201.687898ms to wait for k8s-apps to be running ...
	I0731 19:30:37.085453  444999 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 19:30:37.085510  444999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:30:37.101665  444999 system_svc.go:56] duration metric: took 16.200153ms WaitForService to wait for kubelet
	I0731 19:30:37.101693  444999 kubeadm.go:582] duration metric: took 2.869738288s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 19:30:37.101712  444999 node_conditions.go:102] verifying NodePressure condition ...
	I0731 19:30:37.282284  444999 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 19:30:37.282309  444999 node_conditions.go:123] node cpu capacity is 2
	I0731 19:30:37.282320  444999 node_conditions.go:105] duration metric: took 180.603263ms to run NodePressure ...
	I0731 19:30:37.282335  444999 start.go:241] waiting for startup goroutines ...
	I0731 19:30:37.282345  444999 start.go:246] waiting for cluster config update ...
	I0731 19:30:37.282355  444999 start.go:255] writing updated cluster config ...
	I0731 19:30:37.282719  444999 ssh_runner.go:195] Run: rm -f paused
	I0731 19:30:37.336748  444999 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 19:30:37.339774  444999 out.go:177] * Done! kubectl is now configured to use "pause-693348" cluster and "default" namespace by default
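With the pause-693348 restart finished, the final message can be confirmed from the host, for example:
	kubectl config current-context        # expect "pause-693348"
	kubectl -n kube-system get pods       # the six pods listed above should all be Running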
	I0731 19:30:37.616231  441565 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 19:30:37.616541  441565 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 19:30:37.616565  441565 kubeadm.go:310] 
	I0731 19:30:37.616618  441565 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 19:30:37.616682  441565 kubeadm.go:310] 		timed out waiting for the condition
	I0731 19:30:37.616691  441565 kubeadm.go:310] 
	I0731 19:30:37.616732  441565 kubeadm.go:310] 	This error is likely caused by:
	I0731 19:30:37.616774  441565 kubeadm.go:310] 		- The kubelet is not running
	I0731 19:30:37.616907  441565 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 19:30:37.616921  441565 kubeadm.go:310] 
	I0731 19:30:37.617009  441565 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 19:30:37.617054  441565 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 19:30:37.617101  441565 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 19:30:37.617111  441565 kubeadm.go:310] 
	I0731 19:30:37.617237  441565 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 19:30:37.617340  441565 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 19:30:37.617347  441565 kubeadm.go:310] 
	I0731 19:30:37.617480  441565 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 19:30:37.617592  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 19:30:37.617688  441565 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 19:30:37.617779  441565 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 19:30:37.617786  441565 kubeadm.go:310] 
	I0731 19:30:37.618674  441565 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 19:30:37.618783  441565 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 19:30:37.618878  441565 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 19:30:37.618954  441565 kubeadm.go:394] duration metric: took 3m56.904666471s to StartCluster
	I0731 19:30:37.619032  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 19:30:37.619098  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 19:30:37.673896  441565 cri.go:89] found id: ""
	I0731 19:30:37.673924  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.673934  441565 logs.go:278] No container was found matching "kube-apiserver"
	I0731 19:30:37.673942  441565 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 19:30:37.674013  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 19:30:37.722228  441565 cri.go:89] found id: ""
	I0731 19:30:37.722267  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.722279  441565 logs.go:278] No container was found matching "etcd"
	I0731 19:30:37.722291  441565 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 19:30:37.722363  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 19:30:37.773267  441565 cri.go:89] found id: ""
	I0731 19:30:37.773296  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.773307  441565 logs.go:278] No container was found matching "coredns"
	I0731 19:30:37.773314  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 19:30:37.773381  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 19:30:37.813679  441565 cri.go:89] found id: ""
	I0731 19:30:37.813716  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.813728  441565 logs.go:278] No container was found matching "kube-scheduler"
	I0731 19:30:37.813737  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 19:30:37.813804  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 19:30:37.850740  441565 cri.go:89] found id: ""
	I0731 19:30:37.850769  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.850778  441565 logs.go:278] No container was found matching "kube-proxy"
	I0731 19:30:37.850785  441565 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 19:30:37.850839  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 19:30:37.891443  441565 cri.go:89] found id: ""
	I0731 19:30:37.891474  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.891484  441565 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 19:30:37.891491  441565 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 19:30:37.891558  441565 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 19:30:37.932204  441565 cri.go:89] found id: ""
	I0731 19:30:37.932248  441565 logs.go:276] 0 containers: []
	W0731 19:30:37.932261  441565 logs.go:278] No container was found matching "kindnet"
	I0731 19:30:37.932277  441565 logs.go:123] Gathering logs for kubelet ...
	I0731 19:30:37.932296  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 19:30:37.992472  441565 logs.go:123] Gathering logs for dmesg ...
	I0731 19:30:37.992512  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 19:30:38.008005  441565 logs.go:123] Gathering logs for describe nodes ...
	I0731 19:30:38.008043  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 19:30:38.155717  441565 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 19:30:38.155747  441565 logs.go:123] Gathering logs for CRI-O ...
	I0731 19:30:38.155764  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 19:30:38.269491  441565 logs.go:123] Gathering logs for container status ...
	I0731 19:30:38.269537  441565 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 19:30:38.320851  441565 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 19:30:38.320918  441565 out.go:239] * 
	W0731 19:30:38.320985  441565 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:30:38.321016  441565 out.go:239] * 
	W0731 19:30:38.322107  441565 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 19:30:36.556477  445365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.236362136s)
	I0731 19:30:36.556492  445365 crio.go:469] duration metric: took 2.236477968s to extract the tarball
	I0731 19:30:36.556499  445365 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 19:30:36.594364  445365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 19:30:36.643075  445365 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 19:30:36.643088  445365 cache_images.go:84] Images are preloaded, skipping loading
	I0731 19:30:36.643095  445365 kubeadm.go:934] updating node { 192.168.83.131 8555 v1.30.3 crio true true} ...
	I0731 19:30:36.643208  445365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-235206 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.131
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 19:30:36.643278  445365 ssh_runner.go:195] Run: crio config
	I0731 19:30:36.697277  445365 cni.go:84] Creating CNI manager for ""
	I0731 19:30:36.697294  445365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 19:30:36.697308  445365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 19:30:36.697336  445365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.131 APIServerPort:8555 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-235206 NodeName:cert-options-235206 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.131"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.131 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 19:30:36.697511  445365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.131
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-235206"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.131
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.131"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
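The dump above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. An illustrative check once it lands at the path used a few lines below:
	grep -c '^---$' /var/tmp/minikube/kubeadm.yaml.new   # 3 separators -> 4 documents
	grep '^kind:'   /var/tmp/minikube/kubeadm.yaml.new
	# kind: InitConfiguration
	# kind: ClusterConfiguration
	# kind: KubeletConfiguration
	# kind: KubeProxyConfiguration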
	
	I0731 19:30:36.697579  445365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 19:30:36.708079  445365 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 19:30:36.708141  445365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 19:30:36.717597  445365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0731 19:30:36.734730  445365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 19:30:36.751583  445365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0731 19:30:36.768584  445365 ssh_runner.go:195] Run: grep 192.168.83.131	control-plane.minikube.internal$ /etc/hosts
	I0731 19:30:36.773187  445365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.131	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 19:30:36.786725  445365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 19:30:36.914441  445365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 19:30:36.932642  445365 certs.go:68] Setting up /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206 for IP: 192.168.83.131
	I0731 19:30:36.932658  445365 certs.go:194] generating shared ca certs ...
	I0731 19:30:36.932677  445365 certs.go:226] acquiring lock for ca certs: {Name:mk4fcecdcb85ec33a2df42f56ac1df104becc05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:36.932863  445365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key
	I0731 19:30:36.932896  445365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key
	I0731 19:30:36.932903  445365 certs.go:256] generating profile certs ...
	I0731 19:30:36.932961  445365 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.key
	I0731 19:30:36.932969  445365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.crt with IP's: []
	I0731 19:30:36.996548  445365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.crt ...
	I0731 19:30:36.996565  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.crt: {Name:mk860f5e388c7f3dbee574fcdcfd0adcdfc76e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:36.996735  445365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.key ...
	I0731 19:30:36.996743  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/client.key: {Name:mk46d17dffc012fb8b114cb32d0f28ff61286414 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:36.996824  445365 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key.3e82d523
	I0731 19:30:36.996835  445365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt.3e82d523 with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.131]
	I0731 19:30:37.170389  445365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt.3e82d523 ...
	I0731 19:30:37.170411  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt.3e82d523: {Name:mk7b91edae6ba83c08271c81ceefe8da8b0dea23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:37.170611  445365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key.3e82d523 ...
	I0731 19:30:37.170623  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key.3e82d523: {Name:mk738f01ff5159ffb42e0e80050994c942b2d094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:37.170708  445365 certs.go:381] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt.3e82d523 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt
	I0731 19:30:37.170796  445365 certs.go:385] copying /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key.3e82d523 -> /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key
	I0731 19:30:37.170869  445365 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.key
	I0731 19:30:37.170888  445365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.crt with IP's: []
	I0731 19:30:37.281873  445365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.crt ...
	I0731 19:30:37.281891  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.crt: {Name:mkd333e1a298be286a1421e015709f9953da77bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 19:30:37.282072  445365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.key ...
	I0731 19:30:37.282080  445365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.key: {Name:mkd11e1055ab907235149e9e70b9902b9ad82412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
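The three profile certificates generated above (client, apiserver, proxy-client/aggregator) carry this test's custom settings: the apiserver cert is requested for the extra IPs logged above plus the profile's APIServerNames (localhost, www.google.com) from the cluster config. An illustrative inspection of its SAN list:
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'
	# should list 127.0.0.1, 192.168.15.15, 10.96.0.1, 10.0.0.1 and 192.168.83.131 among the IP SANs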
	I0731 19:30:37.282257  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem (1338 bytes)
	W0731 19:30:37.282296  445365 certs.go:480] ignoring /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313_empty.pem, impossibly tiny 0 bytes
	I0731 19:30:37.282303  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca-key.pem (1679 bytes)
	I0731 19:30:37.282325  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/ca.pem (1082 bytes)
	I0731 19:30:37.282362  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/cert.pem (1123 bytes)
	I0731 19:30:37.282390  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/certs/key.pem (1675 bytes)
	I0731 19:30:37.282424  445365 certs.go:484] found cert: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem (1708 bytes)
	I0731 19:30:37.283004  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 19:30:37.313808  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0731 19:30:37.340747  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 19:30:37.368961  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 19:30:37.399400  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1480 bytes)
	I0731 19:30:37.426860  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 19:30:37.456976  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 19:30:37.482382  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/cert-options-235206/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 19:30:37.509265  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/ssl/certs/4023132.pem --> /usr/share/ca-certificates/4023132.pem (1708 bytes)
	I0731 19:30:37.533185  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 19:30:37.556769  445365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19356-395032/.minikube/certs/402313.pem --> /usr/share/ca-certificates/402313.pem (1338 bytes)
	I0731 19:30:37.582269  445365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 19:30:37.602253  445365 ssh_runner.go:195] Run: openssl version
	I0731 19:30:37.610237  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 19:30:37.625061  445365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:30:37.631899  445365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:30:37.631952  445365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 19:30:37.639948  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 19:30:37.652538  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/402313.pem && ln -fs /usr/share/ca-certificates/402313.pem /etc/ssl/certs/402313.pem"
	I0731 19:30:37.667409  445365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/402313.pem
	I0731 19:30:37.675781  445365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 18:30 /usr/share/ca-certificates/402313.pem
	I0731 19:30:37.675827  445365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/402313.pem
	I0731 19:30:37.682858  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/402313.pem /etc/ssl/certs/51391683.0"
	I0731 19:30:37.706881  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4023132.pem && ln -fs /usr/share/ca-certificates/4023132.pem /etc/ssl/certs/4023132.pem"
	I0731 19:30:37.725027  445365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4023132.pem
	I0731 19:30:37.735693  445365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 18:30 /usr/share/ca-certificates/4023132.pem
	I0731 19:30:37.735759  445365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4023132.pem
	I0731 19:30:37.745269  445365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4023132.pem /etc/ssl/certs/3ec20f2e.0"
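The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: each link is the certificate's subject hash plus a .0 suffix, which is how system trust lookups find the CA. Illustrative verification for the minikube CA:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	ls -l "/etc/ssl/certs/${h}.0"     # should be a symlink to minikubeCA.pem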
	I0731 19:30:37.765110  445365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 19:30:37.773129  445365 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 19:30:37.773192  445365 kubeadm.go:392] StartCluster: {Name:cert-options-235206 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:cert-options-235206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.131 Port:8555 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 19:30:37.773300  445365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 19:30:37.773383  445365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 19:30:37.815080  445365 cri.go:89] found id: ""
	I0731 19:30:37.815155  445365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 19:30:37.826137  445365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 19:30:37.837657  445365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 19:30:37.848779  445365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 19:30:37.848788  445365 kubeadm.go:157] found existing configuration files:
	
	I0731 19:30:37.848843  445365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf
	I0731 19:30:37.859894  445365 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 19:30:37.859952  445365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 19:30:37.870155  445365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf
	I0731 19:30:37.880383  445365 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 19:30:37.880453  445365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 19:30:37.893410  445365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf
	I0731 19:30:37.904564  445365 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 19:30:37.904614  445365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 19:30:37.915031  445365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf
	I0731 19:30:37.924845  445365 kubeadm.go:163] "https://control-plane.minikube.internal:8555" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8555 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 19:30:37.924931  445365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 19:30:37.935876  445365 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 19:30:38.073090  445365 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 19:30:38.073149  445365 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 19:30:38.219997  445365 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 19:30:38.220120  445365 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 19:30:38.220255  445365 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 19:30:38.452807  445365 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 19:30:38.562096  441565 out.go:177] 
	W0731 19:30:38.759839  441565 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 19:30:38.759906  441565 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 19:30:38.759946  441565 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 19:30:38.939936  441565 out.go:177] 
	I0731 19:30:38.628960  445365 out.go:204]   - Generating certificates and keys ...
	I0731 19:30:38.629081  445365 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 19:30:38.629202  445365 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 19:30:38.728865  445365 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 19:30:38.793634  445365 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 19:30:38.929700  445365 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 19:30:39.112887  445365 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 19:30:39.238885  445365 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 19:30:39.239479  445365 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-options-235206 localhost] and IPs [192.168.83.131 127.0.0.1 ::1]
	I0731 19:30:39.493991  445365 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 19:30:39.494200  445365 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-options-235206 localhost] and IPs [192.168.83.131 127.0.0.1 ::1]
	I0731 19:30:39.646927  445365 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 19:30:39.778282  445365 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 19:30:40.018452  445365 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 19:30:40.018722  445365 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 19:30:40.259470  445365 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 19:30:40.383094  445365 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 19:30:40.479675  445365 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 19:30:40.620680  445365 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 19:30:40.856498  445365 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 19:30:40.859020  445365 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 19:30:40.861782  445365 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.723221003Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454241723189908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2bb1669c-2671-40d4-bb4b-cdf308728e55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.723964678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36836c9b-c79a-40e9-b55f-dc7cd0775c4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.724088785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36836c9b-c79a-40e9-b55f-dc7cd0775c4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.724352857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36836c9b-c79a-40e9-b55f-dc7cd0775c4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.768588397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=baae8ffb-cf49-407c-bc6c-73f110419037 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.768907067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=baae8ffb-cf49-407c-bc6c-73f110419037 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.770213150Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4deb0c6-f2f8-43b2-854b-a2a3566d2891 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.770626699Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454241770604868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4deb0c6-f2f8-43b2-854b-a2a3566d2891 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.771325587Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ec14d1c-a1cf-405c-a579-2ee402868cbf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.771400547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ec14d1c-a1cf-405c-a579-2ee402868cbf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.771633376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ec14d1c-a1cf-405c-a579-2ee402868cbf name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.816333609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a452934-11ec-4366-8aee-5e5e2a4301a8 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.816409975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a452934-11ec-4366-8aee-5e5e2a4301a8 name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.818131606Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a348c99f-9b5e-4fd8-8570-ee9667b1ae65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.818548933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454241818524707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a348c99f-9b5e-4fd8-8570-ee9667b1ae65 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.819209612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e67d1b93-1933-4302-b898-f8e5f3ca717b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.819291719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e67d1b93-1933-4302-b898-f8e5f3ca717b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.819542145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e67d1b93-1933-4302-b898-f8e5f3ca717b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.864246906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=988c2281-441d-4e57-bcae-4013be70dabf name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.864324645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=988c2281-441d-4e57-bcae-4013be70dabf name=/runtime.v1.RuntimeService/Version
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.865451684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13ad0574-10ad-4524-a839-0128db409b82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.866051911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722454241866026247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13ad0574-10ad-4524-a839-0128db409b82 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.866725590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39c62644-bdbb-4a4b-be8c-057ce7be6e88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.866800038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39c62644-bdbb-4a4b-be8c-057ce7be6e88 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 19:30:41 pause-693348 crio[2237]: time="2024-07-31 19:30:41.867095874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722454215260811947,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722454215237746354,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722454215208475434,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722454215225527377,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722454213263270663,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a09,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c,PodSandboxId:b52b33f7796d2b8b9f49d34c5148a71d404f086f04dbfec63c1b94344100f436,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722454199933483738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containe
rPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355,PodSandboxId:1d6f6f86a3f539e697737b0cf28c97891a65969f93032d739c2be78f27f3c879,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722454199126528842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-499j6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b24930f-7b1b-40d6-ba58-03fa2546d7c9,},Annotations:map[string]string{io.kubernetes.container.hash: db9b0a0
9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6,PodSandboxId:9f57ae7ca762961fd6f9fa9c1d363d9f92521956c8b384b69eabb6a0c0c9d756,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722454199008577806,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a8c5f8327670d8aac40943f84f07a4b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.con
tainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400,PodSandboxId:fee5a6505a942a4d923e7ae2a48f795811ad1be9e5a0ee909820fa061cf9e083,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722454199017983255,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03df059a6706ede11dc1ab46cc2fd86f,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1,PodSandboxId:16bbce28ff47851af85ca1ab68d630d65de8f21a0693ceef0c464874b4e4a61c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722454198919278822,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54023be40c6bec2cd30d1757f3d239df,},Annotations:map[string]string{io.kubernetes.container.hash: 80c23fde,io.kubernetes.container.restartCount: 1,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6,PodSandboxId:26afaaccaa0b4d13e4d1c8463b638b7024a04f76d70acd67dec09fd31f64f84a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722454198915284794,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-693348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bff1ed9eba4aa44f903201e0708ea4d,},Annotations:map[string]string{io.kubernetes.container.hash: 2955e17,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMes
sagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd,PodSandboxId:22d45af9ffc46fab1b59ea955558eb7a6bb7ff712ba1c0126482819f77d1f301,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722454164414095286,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-6fnsb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e0447f5-1a2d-4a88-ab83-14b300b194af,},Annotations:map[string]string{io.kubernetes.container.hash: 52afb0eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39c62644-bdbb-4a4b-be8c-057ce7be6e88 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0ae92c2db4abd       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   26 seconds ago       Running             kube-controller-manager   2                   fee5a6505a942       kube-controller-manager-pause-693348
	e564c6c56ae5e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   26 seconds ago       Running             kube-apiserver            2                   26afaaccaa0b4       kube-apiserver-pause-693348
	51a4f1b908a69       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   26 seconds ago       Running             etcd                      2                   16bbce28ff478       etcd-pause-693348
	5126ebeb8388e       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   26 seconds ago       Running             kube-scheduler            2                   9f57ae7ca7629       kube-scheduler-pause-693348
	26cf1de849324       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   28 seconds ago       Running             kube-proxy                2                   1d6f6f86a3f53       kube-proxy-499j6
	5d2270e9794f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   42 seconds ago       Running             coredns                   1                   b52b33f7796d2       coredns-7db6d8ff4d-6fnsb
	3387ae84fb48c       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   42 seconds ago       Exited              kube-proxy                1                   1d6f6f86a3f53       kube-proxy-499j6
	cc3011c3828ca       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   42 seconds ago       Exited              kube-controller-manager   1                   fee5a6505a942       kube-controller-manager-pause-693348
	d9774e8fdb6ac       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   42 seconds ago       Exited              kube-scheduler            1                   9f57ae7ca7629       kube-scheduler-pause-693348
	d7142c59b8ee8       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   43 seconds ago       Exited              etcd                      1                   16bbce28ff478       etcd-pause-693348
	b9c5fe953bfb4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   43 seconds ago       Exited              kube-apiserver            1                   26afaaccaa0b4       kube-apiserver-pause-693348
	ea0bc030f2a1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   22d45af9ffc46       coredns-7db6d8ff4d-6fnsb
	
	
	==> coredns [5d2270e9794f73d78e143075a4618e73875ec677cb19716b1cf93b738256363c] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58512 - 1748 "HINFO IN 6215870457684000845.2670301909281061572. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013386942s
	
	
	==> coredns [ea0bc030f2a1ada515207383deb14f5251956a954ba2bef28320561a8ec2c5dd] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54066 - 61204 "HINFO IN 3775059066234186191.3492531731191115626. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015422943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-693348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-693348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5d3a875a2675af218d215883e61010adcc3d415c
	                    minikube.k8s.io/name=pause-693348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T19_29_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 19:29:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-693348
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 19:30:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 19:30:18 +0000   Wed, 31 Jul 2024 19:29:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    pause-693348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 734249b4712b4b8db606fe29bdae397b
	  System UUID:                734249b4-712b-4b8d-b606-fe29bdae397b
	  Boot ID:                    71787efe-f098-430f-878c-7b3fc264d21c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6fnsb                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-pause-693348                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         94s
	  kube-system                 kube-apiserver-pause-693348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-693348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-499j6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-693348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 78s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-693348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-693348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-693348 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 94s                kubelet          Starting kubelet.
	  Normal  NodeReady                93s                kubelet          Node pause-693348 status is now: NodeReady
	  Normal  RegisteredNode           81s                node-controller  Node pause-693348 event: Registered Node pause-693348 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-693348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-693348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-693348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-693348 event: Registered Node pause-693348 in Controller
	
	
	==> dmesg <==
	[  +9.507494] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064726] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.177902] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.134755] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.263307] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.386881] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062460] kauditd_printk_skb: 130 callbacks suppressed
	[Jul31 19:29] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.741942] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.797274] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.082555] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.910325] systemd-fstab-generator[1509]: Ignoring "noauto" option for root device
	[  +0.133254] kauditd_printk_skb: 21 callbacks suppressed
	[ +10.959931] kauditd_printk_skb: 69 callbacks suppressed
	[ +21.164703] systemd-fstab-generator[2154]: Ignoring "noauto" option for root device
	[  +0.152768] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.181943] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.168784] systemd-fstab-generator[2193]: Ignoring "noauto" option for root device
	[  +0.279043] systemd-fstab-generator[2221]: Ignoring "noauto" option for root device
	[  +2.184708] systemd-fstab-generator[2348]: Ignoring "noauto" option for root device
	[Jul31 19:30] kauditd_printk_skb: 195 callbacks suppressed
	[ +13.878511] systemd-fstab-generator[3171]: Ignoring "noauto" option for root device
	[  +3.853479] kauditd_printk_skb: 39 callbacks suppressed
	[ +15.962927] systemd-fstab-generator[3506]: Ignoring "noauto" option for root device
	
	
	==> etcd [51a4f1b908a69fbfde81ff312253d5e4e0f2f0e07bb66de14effa6cf04114bf8] <==
	{"level":"info","ts":"2024-07-31T19:30:15.978859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:15.978886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgPreVoteResp from 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:15.978899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became candidate at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became leader at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.978924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 4"}
	{"level":"info","ts":"2024-07-31T19:30:15.98601Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5245f38ecce3eccc","local-member-attributes":"{Name:pause-693348 ClientURLs:[https://192.168.39.242:2379]}","request-path":"/0/members/5245f38ecce3eccc/attributes","cluster-id":"9dd55050173e419e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:30:15.986098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:15.986513Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:15.997708Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:30:15.999333Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.242:2379"}
	{"level":"info","ts":"2024-07-31T19:30:16.005709Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:16.005756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:38.754527Z","caller":"traceutil/trace.go:171","msg":"trace[1958914525] transaction","detail":"{read_only:false; response_revision:456; number_of_response:1; }","duration":"258.644722ms","start":"2024-07-31T19:30:38.495852Z","end":"2024-07-31T19:30:38.754497Z","steps":["trace[1958914525] 'process raft request'  (duration: 258.512289ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:30:39.300926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.446541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17063172561502444639 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-31T19:30:39.301081Z","caller":"traceutil/trace.go:171","msg":"trace[566640949] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:498; }","duration":"502.75952ms","start":"2024-07-31T19:30:38.7983Z","end":"2024-07-31T19:30:39.30106Z","steps":["trace[566640949] 'read index received'  (duration: 375.29408ms)","trace[566640949] 'applied index is now lower than readState.Index'  (duration: 127.463407ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:30:39.301232Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"502.919475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-07-31T19:30:39.301309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.374156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.242\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-07-31T19:30:39.301335Z","caller":"traceutil/trace.go:171","msg":"trace[1535991554] range","detail":"{range_begin:/registry/masterleases/192.168.39.242; range_end:; response_count:1; response_revision:457; }","duration":"144.428247ms","start":"2024-07-31T19:30:39.156899Z","end":"2024-07-31T19:30:39.301327Z","steps":["trace[1535991554] 'agreement among raft nodes before linearized reading'  (duration: 144.378375ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T19:30:39.301387Z","caller":"traceutil/trace.go:171","msg":"trace[1158760084] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:457; }","duration":"503.002648ms","start":"2024-07-31T19:30:38.79827Z","end":"2024-07-31T19:30:39.301273Z","steps":["trace[1158760084] 'agreement among raft nodes before linearized reading'  (duration: 502.878103ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T19:30:39.301429Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:38.798255Z","time spent":"503.16169ms","remote":"127.0.0.1:56546","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-07-31T19:30:39.301642Z","caller":"traceutil/trace.go:171","msg":"trace[1077860937] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"652.840504ms","start":"2024-07-31T19:30:38.648788Z","end":"2024-07-31T19:30:39.301629Z","steps":["trace[1077860937] 'process raft request'  (duration: 524.943792ms)","trace[1077860937] 'compare'  (duration: 126.317448ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T19:30:39.301779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:38.648769Z","time spent":"652.963002ms","remote":"127.0.0.1:56820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" mod_revision:443 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq\" > >"}
	{"level":"warn","ts":"2024-07-31T19:30:39.622639Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.843413ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17063172561502444644 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:6ccc910a458b9c63>","response":"size:40"}
	{"level":"warn","ts":"2024-07-31T19:30:39.622781Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-31T19:30:39.303788Z","time spent":"318.990448ms","remote":"127.0.0.1:56584","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1] <==
	{"level":"info","ts":"2024-07-31T19:29:59.905333Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:01.56542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgPreVoteResp from 5245f38ecce3eccc at term 2"}
	{"level":"info","ts":"2024-07-31T19:30:01.565784Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc received MsgVoteResp from 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5245f38ecce3eccc became leader at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.565935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5245f38ecce3eccc elected leader 5245f38ecce3eccc at term 3"}
	{"level":"info","ts":"2024-07-31T19:30:01.570092Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5245f38ecce3eccc","local-member-attributes":"{Name:pause-693348 ClientURLs:[https://192.168.39.242:2379]}","request-path":"/0/members/5245f38ecce3eccc/attributes","cluster-id":"9dd55050173e419e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T19:30:01.570341Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:01.570548Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T19:30:01.571099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:01.571165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T19:30:01.573832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T19:30:01.573849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.242:2379"}
	{"level":"info","ts":"2024-07-31T19:30:03.163501Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T19:30:03.164051Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-693348","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.242:2380"],"advertise-client-urls":["https://192.168.39.242:2379"]}
	{"level":"warn","ts":"2024-07-31T19:30:03.164234Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.164372Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.180842Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T19:30:03.180897Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.242:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T19:30:03.182845Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"5245f38ecce3eccc","current-leader-member-id":"5245f38ecce3eccc"}
	{"level":"info","ts":"2024-07-31T19:30:03.194783Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:03.194956Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.242:2380"}
	{"level":"info","ts":"2024-07-31T19:30:03.194982Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-693348","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.242:2380"],"advertise-client-urls":["https://192.168.39.242:2379"]}
	
	
	==> kernel <==
	 19:30:42 up 2 min,  0 users,  load average: 0.68, 0.27, 0.10
	Linux pause-693348 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6] <==
	W0731 19:30:12.422600       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.438038       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.472998       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.493085       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.517215       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.526245       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.592385       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.616949       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.666110       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.674152       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.674346       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.712754       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.787865       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.797243       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.822604       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.889074       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:12.967948       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.046930       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.053983       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.072622       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.084127       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.133583       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.144761       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.261111       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 19:30:13.296701       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e564c6c56ae5e8593ab6a57e090db6366a97156ec5987981afcd14d2d0eb248b] <==
	I0731 19:30:18.079214       1 aggregator.go:165] initial CRD sync complete...
	I0731 19:30:18.079397       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 19:30:18.079445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 19:30:18.099197       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 19:30:18.141128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 19:30:18.142392       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 19:30:18.143038       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 19:30:18.143050       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 19:30:18.143063       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 19:30:18.148914       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 19:30:18.157152       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 19:30:18.171818       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 19:30:18.180048       1 cache.go:39] Caches are synced for autoregister controller
	I0731 19:30:18.949150       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 19:30:19.502888       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 19:30:19.520718       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 19:30:19.574549       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 19:30:19.615002       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 19:30:19.622034       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 19:30:30.985412       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 19:30:31.084904       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 19:30:39.302476       1 trace.go:236] Trace[1199425645]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:aae71824-2c9d-40b3-9866-206b0ee763af,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:apiserver-mnjmpho2cnxz7dbg2ti6x722vq,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-mnjmpho2cnxz7dbg2ti6x722vq,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:PUT (31-Jul-2024 19:30:38.645) (total time: 657ms):
	Trace[1199425645]: ["GuaranteedUpdate etcd3" audit-id:aae71824-2c9d-40b3-9866-206b0ee763af,key:/leases/kube-system/apiserver-mnjmpho2cnxz7dbg2ti6x722vq,type:*coordination.Lease,resource:leases.coordination.k8s.io 656ms (19:30:38.645)
	Trace[1199425645]:  ---"Txn call completed" 654ms (19:30:39.302)]
	Trace[1199425645]: [657.048463ms] [657.048463ms] END
	
	
	==> kube-controller-manager [0ae92c2db4abd3b86d657fce40e0cc060caec35e5a7a1e93b8ce01c9a83420fa] <==
	I0731 19:30:30.783732       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 19:30:30.790813       1 shared_informer.go:320] Caches are synced for node
	I0731 19:30:30.790932       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0731 19:30:30.790973       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0731 19:30:30.791031       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0731 19:30:30.791064       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0731 19:30:30.792792       1 shared_informer.go:320] Caches are synced for service account
	I0731 19:30:30.793991       1 shared_informer.go:320] Caches are synced for deployment
	I0731 19:30:30.795956       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 19:30:30.798313       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 19:30:30.799599       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 19:30:30.809293       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 19:30:30.810629       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 19:30:30.814056       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 19:30:30.837359       1 shared_informer.go:320] Caches are synced for HPA
	I0731 19:30:30.842037       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 19:30:30.931253       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 19:30:30.947163       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 19:30:30.959054       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:30:30.970981       1 shared_informer.go:320] Caches are synced for job
	I0731 19:30:30.981824       1 shared_informer.go:320] Caches are synced for cronjob
	I0731 19:30:30.992187       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 19:30:31.426902       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:30:31.467994       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 19:30:31.468085       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400] <==
	
	
	==> kube-proxy [26cf1de8493247a09911e3140647e82aa5e1f2de520441ebb203d70e3c645f2f] <==
	I0731 19:30:13.449554       1 server_linux.go:69] "Using iptables proxy"
	E0731 19:30:13.459492       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-693348\": dial tcp 192.168.39.242:8443: connect: connection refused"
	E0731 19:30:14.600258       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-693348\": dial tcp 192.168.39.242:8443: connect: connection refused"
	I0731 19:30:18.103087       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.242"]
	I0731 19:30:18.175429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 19:30:18.175560       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 19:30:18.175686       1 server_linux.go:165] "Using iptables Proxier"
	I0731 19:30:18.180029       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 19:30:18.180613       1 server.go:872] "Version info" version="v1.30.3"
	I0731 19:30:18.180641       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:18.182569       1 config.go:192] "Starting service config controller"
	I0731 19:30:18.182627       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 19:30:18.182994       1 config.go:101] "Starting endpoint slice config controller"
	I0731 19:30:18.183019       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 19:30:18.184273       1 config.go:319] "Starting node config controller"
	I0731 19:30:18.184300       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 19:30:18.283298       1 shared_informer.go:320] Caches are synced for service config
	I0731 19:30:18.283281       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 19:30:18.284328       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3387ae84fb48cd686b23403b1682f148bf7bfa520127fa7b5b312cfad4db5355] <==
	
	
	==> kube-scheduler [5126ebeb8388e3cb0e4be3fbda4550529b9ecd35c79750d2d4b2c3b9ef7a01d7] <==
	I0731 19:30:16.847459       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:30:17.985465       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:30:17.985563       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:30:17.985602       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:30:17.985625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:30:18.066218       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:30:18.066299       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:18.067779       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:30:18.067881       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:30:18.069968       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:30:18.067951       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 19:30:18.170541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6] <==
	I0731 19:30:00.394790       1 serving.go:380] Generated self-signed cert in-memory
	W0731 19:30:02.993086       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 19:30:02.993130       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 19:30:02.993139       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 19:30:02.993145       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 19:30:03.040750       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 19:30:03.040793       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 19:30:03.046490       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 19:30:03.046707       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 19:30:03.046747       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 19:30:03.046831       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 19:30:03.046908       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0731 19:30:03.047112       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0731 19:30:03.050776       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969272    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-ca-certs\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969294    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-flexvolume-dir\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: I0731 19:30:14.969342    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03df059a6706ede11dc1ab46cc2fd86f-kubeconfig\") pod \"kube-controller-manager-pause-693348\" (UID: \"03df059a6706ede11dc1ab46cc2fd86f\") " pod="kube-system/kube-controller-manager-pause-693348"
	Jul 31 19:30:14 pause-693348 kubelet[3178]: E0731 19:30:14.976479    3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-693348?timeout=10s\": dial tcp 192.168.39.242:8443: connect: connection refused" interval="400ms"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.066645    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.067573    3178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.242:8443: connect: connection refused" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.193348    3178 scope.go:117] "RemoveContainer" containerID="d9774e8fdb6ac53954823eb950b4d52f29100cc60eb25814fb441710eb2255a6"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.195575    3178 scope.go:117] "RemoveContainer" containerID="d7142c59b8ee88cd30f017913a837f2ea0317c89aa626982908f808c02a830b1"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.197768    3178 scope.go:117] "RemoveContainer" containerID="b9c5fe953bfb44e8eb76f260a1a0ae288500f29825bef4ceab51f55168b911c6"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.200084    3178 scope.go:117] "RemoveContainer" containerID="cc3011c3828ca9bd0ea16dec92fbe112660827fb345d6f8667be593efdc31400"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.377528    3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-693348?timeout=10s\": dial tcp 192.168.39.242:8443: connect: connection refused" interval="800ms"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: I0731 19:30:15.471241    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:15 pause-693348 kubelet[3178]: E0731 19:30:15.472635    3178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.242:8443: connect: connection refused" node="pause-693348"
	Jul 31 19:30:16 pause-693348 kubelet[3178]: I0731 19:30:16.275029    3178 kubelet_node_status.go:73] "Attempting to register node" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.195188    3178 kubelet_node_status.go:112] "Node was previously registered" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.195289    3178 kubelet_node_status.go:76] "Successfully registered node" node="pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.198176    3178 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.199600    3178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: E0731 19:30:18.631300    3178 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-pause-693348\" already exists" pod="kube-system/etcd-pause-693348"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.739249    3178 apiserver.go:52] "Watching apiserver"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.743631    3178 topology_manager.go:215] "Topology Admit Handler" podUID="4e0447f5-1a2d-4a88-ab83-14b300b194af" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6fnsb"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.744081    3178 topology_manager.go:215] "Topology Admit Handler" podUID="5b24930f-7b1b-40d6-ba58-03fa2546d7c9" podNamespace="kube-system" podName="kube-proxy-499j6"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.763228    3178 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.843184    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b24930f-7b1b-40d6-ba58-03fa2546d7c9-lib-modules\") pod \"kube-proxy-499j6\" (UID: \"5b24930f-7b1b-40d6-ba58-03fa2546d7c9\") " pod="kube-system/kube-proxy-499j6"
	Jul 31 19:30:18 pause-693348 kubelet[3178]: I0731 19:30:18.843253    3178 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b24930f-7b1b-40d6-ba58-03fa2546d7c9-xtables-lock\") pod \"kube-proxy-499j6\" (UID: \"5b24930f-7b1b-40d6-ba58-03fa2546d7c9\") " pod="kube-system/kube-proxy-499j6"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-693348 -n pause-693348
helpers_test.go:261: (dbg) Run:  kubectl --context pause-693348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7200.059s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0731 20:10:32.742536  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 20:10:47.238488  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kindnet-170831/client.crt: no such file or directory
E0731 20:10:48.192333  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/flannel-170831/client.crt: no such file or directory
E0731 20:10:57.344421  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/bridge-170831/client.crt: no such file or directory
E0731 20:11:05.304755  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/old-k8s-version-553149/client.crt: no such file or directory
E0731 20:11:38.484056  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/no-preload-417122/client.crt: no such file or directory
E0731 20:11:46.235124  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/auto-170831/client.crt: no such file or directory
E0731 20:11:47.879074  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/calico-170831/client.crt: no such file or directory
E0731 20:12:27.224967  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/old-k8s-version-553149/client.crt: no such file or directory
E0731 20:12:28.881620  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/custom-flannel-170831/client.crt: no such file or directory
E0731 20:12:44.191114  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/kindnet-170831/client.crt: no such file or directory
E0731 20:13:09.737155  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/enable-default-cni-170831/client.crt: no such file or directory
E0731 20:13:44.832562  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/calico-170831/client.crt: no such file or directory
E0731 20:13:48.017519  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 20:13:51.237681  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/flannel-170831/client.crt: no such file or directory
E0731 20:13:54.639207  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/no-preload-417122/client.crt: no such file or directory
E0731 20:14:00.390689  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/bridge-170831/client.crt: no such file or directory
E0731 20:14:22.325216  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/no-preload-417122/client.crt: no such file or directory
E0731 20:14:25.835510  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/custom-flannel-170831/client.crt: no such file or directory
E0731 20:14:43.381973  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/old-k8s-version-553149/client.crt: no such file or directory
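The repeated cert_rotation.go:168 errors above come from client-go's background certificate-reload workers (the dynamicClientCert goroutines visible in the dump below), which re-read each cached profile's client certificate roughly once a second; once a profile such as addons-469211 has been deleted, the file is gone and every reload fails the same way. A minimal sketch of that failure mode, assuming only that the path copied from the log no longer exists on disk:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied verbatim from the cert_rotation.go:168 errors above; once the
		// addons-469211 profile is deleted, this file is gone and every reload fails.
		const crt = "/home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt"
		if _, err := os.ReadFile(crt); err != nil {
			// Produces the same "open ...: no such file or directory" error the worker logs.
			fmt.Println("key failed with :", err)
		}
	}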
panic: test timed out after 2h0m0s
running tests:
	TestStartStop (49m19s)
	TestStartStop/group/default-k8s-diff-port (28m58s)
	TestStartStop/group/default-k8s-diff-port/serial (28m58s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (4m26s)

                                                
                                                
goroutine 8039 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 8 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006f8ea0, 0xc0007b3bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006b44f8, {0x49d5100, 0x2b, 0x2b}, {0x26b6029?, 0xc00093bb00?, 0x4a91a40?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000514640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000514640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000706f80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2713 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0013c81a0, 0x313e800)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2308
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 7794 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bdd50, 0xc000430dc0}, {0x36b14a0, 0xc001c7f8a0}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bddc0?, 0xc000442000?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bddc0, 0xc000442000}, 0xc00146c340, {0xc001fb6000, 0x1c}, {0x26816c6, 0x14}, {0x2699286, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36bddc0, 0xc000442000}, 0xc00146c340, {0xc001fb6000, 0x1c}, {0x26845c0?, 0xc0019baf60?}, {0x551133?, 0x4a170f?}, {0xc0001b9100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00146c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00146c340, 0xc001cf0080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3863
	/usr/local/go/src/testing/testing.go:1742 +0x390
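Goroutine 7794 is the stuck test itself: validateAddonAfterStop calls PodWait, which polls through apimachinery's PollUntilContextTimeout with a 540-second (9m0s) deadline for pods matching "k8s-app=kubernetes-dashboard". The following is a minimal, hypothetical sketch of that polling pattern, not the actual helpers_test.go code; the function name, kubeconfig handling, and readiness check are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunningPods polls until every pod matching selector in ns is Running,
	// or the timeout expires (the 9m0s case above then fails the test).
	func waitForRunningPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		// Assumes the default kubeconfig; the integration test uses the profile's own context instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitForRunningPods(context.Background(), cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard", 9*time.Minute)
		fmt.Println("wait result:", err)
	}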

                                                
                                                
goroutine 221 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc0013a0480)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 157
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3167 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00095ad10, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d20540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00095ad40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019c4020, {0x3699f00, 0xc000a6c1b0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019c4020, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3252
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3779 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b7a210, 0x17)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001505560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b7a240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00070de60, {0x3699f00, 0xc000957da0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00070de60, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3769
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 917 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000a693e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 869
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3781 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3780
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1334 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001730180, 0xc001c0d860)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1333
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 183 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007d0cd0, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00099cc60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007d0d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00070c8b0, {0x3699f00, 0xc00099e000}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00070c8b0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 932 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000429990, 0x29)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000a692c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0004299c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000802b70, {0x3699f00, 0xc000000120}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000802b70, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 151 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00099cd80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 152 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007d0d00, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2849 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc001377750, 0xc001377798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x0?, 0xc001377750, 0xc001377798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013777d0?, 0xa12325?, 0xc001d6a4e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 184 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc000505750, 0xc0012e4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0xce?, 0xc000505750, 0xc000505798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc0006f89c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005057d0?, 0x592e44?, 0xc000000360?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 152
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 185 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 184
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 220 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc0013a0480)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 157
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 242 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc00151e120)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 192
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3368 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc00137cf50, 0xc00130ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x80?, 0xc00137cf50, 0xc00137cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc0007dc340?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00137cfd0?, 0x592e44?, 0xc001f78180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3311
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 161 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc00151e120)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 192
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3473 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d212c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3538 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d23c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3522
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3490 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b7bbc0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3472
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3112 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3111
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1399 [select, 100 minutes]:
net/http.(*persistConn).readLoop(0xc0018c2b40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1397
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3539 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d29800, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3522
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 918 [chan receive, 101 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0004299c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 869
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 933 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc0014bcf50, 0xc00136ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x0?, 0xc0014bcf50, 0xc0014bcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc000225080?, 0xc000818300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3780 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc000094750, 0xc001522f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x0?, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x99b656?, 0xc000225800?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000947d0?, 0x9a9ba5?, 0xc00164d500?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3769
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1201 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b79680, 0xc001d2cd80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1200
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3111 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc000505f50, 0xc000505f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0xc0?, 0xc000505f50, 0xc000505f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000505fd0?, 0x592e44?, 0xc0017bdec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3072
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3251 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d20660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3268
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2977 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000074b10, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d20a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000074b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013e0010, {0x3699f00, 0xc001548000}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013e0010, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3007
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3311 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b7a540, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3306
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 732 [IO wait, 105 minutes]:
internal/poll.runtime_pollWait(0x7f0889a474f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x13?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000706680)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000706680)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001670740)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001670740)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00068a0f0, {0x36b0de0, 0xc001670740})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00068a0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00017f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 729
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 3452 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b7bb90, 0x19)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d211a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b7bbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008dd550, {0x3699f00, 0xc001680b70}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008dd550, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3490
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3007 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000074b40, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3005
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3526 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d297d0, 0x19)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d23b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d29800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001e88e40, {0x3699f00, 0xc0016f4960}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001e88e40, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3539
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3072 [chan receive, 42 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d28b80, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3367 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001b7a510, 0x19)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001505620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b7a540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000768a80, {0x3699f00, 0xc0007da330}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000768a80, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3311
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2308 [chan receive, 50 minutes]:
testing.(*T).Run(0xc0007dc680, {0x265b689?, 0x551133?}, 0x313e800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0007dc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0007dc680, 0x313e628)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2877 [chan receive, 44 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007d1080, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2844
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3908 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00169b380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 5990 [IO wait]:
internal/poll.runtime_pollWait(0x7f0889a47bb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000cb980?, 0xc001b19800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000cb980, {0xc001b19800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0000cb980, {0xc001b19800?, 0xc0002ad900?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0013330d8, {0xc001b19800?, 0xc001b1985f?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0014644b0, {0xc001b19800?, 0x0?, 0xc0014644b0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc000155430, {0x369a6a0, 0xc0014644b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000155188, {0x7f0871651470, 0xc002055578}, 0xc0017b1980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000155188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000155188, {0xc0013fa000, 0x1000, 0xc001519c00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc0017e6540, {0xc002005e00, 0x9, 0x4990c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3698b80, 0xc0017e6540}, {0xc002005e00, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002005e00, 0x9, 0x17b1dc0?}, {0x3698b80?, 0xc0017e6540?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002005dc0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0017b1fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0019da600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5989
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3453 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc001f4a750, 0xc0007b2f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x60?, 0xc001f4a750, 0xc001f4a798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc00017fd40?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc001ff8a80?, 0xc0006ca060?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3490
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3528 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3527
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3527 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc00137df50, 0xc0013b9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0xce?, 0xc00137df50, 0xc00137df98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc0013c8000?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00137dfd0?, 0x592e44?, 0xc000001860?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3539
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 934 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 933
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3369 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3368
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3071 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c88600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3070
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3892 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0007d1210, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00169b260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007d1240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002064000, {0x3699f00, 0xc00205e000}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002064000, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3909
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3310 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001505740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3306
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3768 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001505800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3759
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1400 [select, 100 minutes]:
net/http.(*persistConn).writeLoop(0xc0018c2b40)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1397
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 1253 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b78300, 0xc001f78720)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 824
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 4310 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0019aa790, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d6b6e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019aa7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00070c060, {0x3699f00, 0xc00206a030}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00070c060, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4330
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3454 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3453
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3011 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3010
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3110 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d28b50, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c884e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d28b80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0012f94b0, {0x3699f00, 0xc0012fa8d0}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0012f94b0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3072
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3006 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d20b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3005
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2898 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2849
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2876 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d6a4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2844
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3893 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc001961750, 0xc001961798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0xe0?, 0xc001961750, 0xc001961798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc0006f9040?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc00164c180?, 0xc001c0c1e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3909
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3252 [chan receive, 40 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00095ad40, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3268
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3769 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b7a240, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3759
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3894 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3893
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4330 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019aa7c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3169 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3168
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4311 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc0014bef50, 0xc0014bef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x40?, 0xc0014bef50, 0xc0014bef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x99b656?, 0xc00164c300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00164c300?, 0x592e44?, 0xc001f15140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4330
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2848 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0007d1050, 0x1a)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d6a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0007d1080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d58590, {0x3699f00, 0xc001548930}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d58590, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2877
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2716 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0013c8820, {0x265cc34?, 0x0?}, 0xc00098c400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013c8820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013c8820, 0xc001f1e1c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2713
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3863 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00146c1a0, {0x268172a?, 0x60400000004?}, 0xc001cf0080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00146c1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00146c1a0, 0xc00098c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2716
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3168 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc00195f750, 0xc00130bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x7?, 0xc00195f750, 0xc00195f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0xc0013c91e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00195f7d0?, 0x592e44?, 0xc00205ecf0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3252
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3010 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bdf80, 0xc000060060}, 0xc001f47f50, 0xc001353f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bdf80, 0xc000060060}, 0x0?, 0xc001f47f50, 0xc001f47f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bdf80?, 0xc000060060?}, 0x100000004991a30?, 0xc0004d4a00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001f47fd0?, 0x592e44?, 0xc001f47fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3007
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3909 [chan receive, 26 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0007d1240, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3840
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4329 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001d6b860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 4328
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 4312 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4311
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb
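
Most of the parked goroutines above are client-go's routine background machinery rather than a separate failure: each transport.(*dynamicClientCert).Run starts a workqueue worker (blocked in sync.Cond.Wait inside workqueue.(*Type).Get while the queue is empty) and a certificate poller (blocked in select inside PollImmediateUntilWithContext between polls). A minimal, self-contained sketch of that worker/poller pattern is shown below; the queue item, poll interval, and print statement are illustrative placeholders, not minikube or client-go internals.

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/util/workqueue"
    )

    func main() {
    	stopCh := make(chan struct{})
    	queue := workqueue.New()

    	// Worker goroutine: blocks in queue.Get() whenever the queue is empty,
    	// which is what shows up in a stack dump as sync.Cond.Wait under
    	// workqueue.(*Type).Get.
    	go wait.Until(func() {
    		for {
    			item, shutdown := queue.Get()
    			if shutdown {
    				return
    			}
    			fmt.Println("processing", item) // placeholder for real work
    			queue.Done(item)
    		}
    	}, time.Second, stopCh)

    	// Poller goroutine: parks in a select inside the wait machinery between
    	// polls, matching the PollImmediateUntilWithContext frames in the dump.
    	go func() {
    		_ = wait.PollImmediateUntil(30*time.Second, func() (bool, error) {
    			// e.g. re-check whether a client certificate needs rotation
    			return false, nil
    		}, stopCh)
    	}()

    	queue.Add("example-key")
    	time.Sleep(2 * time.Second)
    	queue.ShutDown()
    	close(stopCh)
    }
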

                                                
                                    

Test pass (226/278)

Order  Passed test  Duration (seconds)
3 TestDownloadOnly/v1.20.0/json-events 54.67
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 13.77
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 54.17
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 66.05
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 211.94
40 TestAddons/serial/GCPAuth/Namespaces 2.77
42 TestAddons/parallel/Registry 18.36
44 TestAddons/parallel/InspektorGadget 10.8
46 TestAddons/parallel/HelmTiller 11.94
48 TestAddons/parallel/CSI 53.1
49 TestAddons/parallel/Headlamp 23.05
50 TestAddons/parallel/CloudSpanner 5.64
51 TestAddons/parallel/LocalPath 59.09
52 TestAddons/parallel/NvidiaDevicePlugin 6.88
53 TestAddons/parallel/Yakd 12.06
55 TestCertOptions 42.01
56 TestCertExpiration 279.19
58 TestForceSystemdFlag 99.75
59 TestForceSystemdEnv 72.29
61 TestKVMDriverInstallOrUpdate 5.02
65 TestErrorSpam/setup 44.19
66 TestErrorSpam/start 0.35
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.54
69 TestErrorSpam/unpause 1.61
70 TestErrorSpam/stop 5.33
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 97.92
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.13
77 TestFunctional/serial/KubeContext 0.05
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.41
82 TestFunctional/serial/CacheCmd/cache/add_local 2.24
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
87 TestFunctional/serial/CacheCmd/cache/delete 0.1
88 TestFunctional/serial/MinikubeKubectlCmd 0.11
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 34.88
91 TestFunctional/serial/ComponentHealth 0.07
92 TestFunctional/serial/LogsCmd 1.4
93 TestFunctional/serial/LogsFileCmd 1.44
94 TestFunctional/serial/InvalidService 3.99
96 TestFunctional/parallel/ConfigCmd 0.35
97 TestFunctional/parallel/DashboardCmd 19.24
98 TestFunctional/parallel/DryRun 0.28
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 0.91
104 TestFunctional/parallel/ServiceCmdConnect 8.93
105 TestFunctional/parallel/AddonsCmd 0.13
106 TestFunctional/parallel/PersistentVolumeClaim 50.89
108 TestFunctional/parallel/SSHCmd 0.48
109 TestFunctional/parallel/CpCmd 1.24
110 TestFunctional/parallel/MySQL 25.69
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.28
116 TestFunctional/parallel/NodeLabels 0.09
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
120 TestFunctional/parallel/License 0.64
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
126 TestFunctional/parallel/ImageCommands/Setup 1.94
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.75
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
132 TestFunctional/parallel/ServiceCmd/DeployApp 24.19
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.23
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.99
146 TestFunctional/parallel/ImageCommands/ImageRemove 1
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.05
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
149 TestFunctional/parallel/ServiceCmd/List 0.64
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
152 TestFunctional/parallel/ServiceCmd/Format 0.3
153 TestFunctional/parallel/ServiceCmd/URL 0.33
154 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
155 TestFunctional/parallel/ProfileCmd/profile_list 0.28
156 TestFunctional/parallel/ProfileCmd/profile_json_output 0.29
157 TestFunctional/parallel/MountCmd/any-port 8.42
158 TestFunctional/parallel/MountCmd/specific-port 1.79
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 216.11
167 TestMultiControlPlane/serial/DeployApp 6.39
168 TestMultiControlPlane/serial/PingHostFromPods 1.24
169 TestMultiControlPlane/serial/AddWorkerNode 60.21
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
172 TestMultiControlPlane/serial/CopyFile 13.04
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.25
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
181 TestMultiControlPlane/serial/RestartCluster 280.2
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 79.3
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
188 TestJSONOutput/start/Command 97.37
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.73
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.64
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 6.69
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 91.36
220 TestMountStart/serial/StartWithMountFirst 28.18
221 TestMountStart/serial/VerifyMountFirst 0.37
222 TestMountStart/serial/StartWithMountSecond 32.76
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.65
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 21.74
228 TestMountStart/serial/VerifyMountPostStop 0.38
231 TestMultiNode/serial/FreshStart2Nodes 127.69
232 TestMultiNode/serial/DeployApp2Nodes 5.51
233 TestMultiNode/serial/PingHostFrom2Pods 0.81
234 TestMultiNode/serial/AddNode 52.19
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.19
238 TestMultiNode/serial/StopNode 2.34
239 TestMultiNode/serial/StartAfterStop 39.3
241 TestMultiNode/serial/DeleteNode 2.49
243 TestMultiNode/serial/RestartMultiNode 182.33
244 TestMultiNode/serial/ValidateNameConflict 45.22
251 TestScheduledStopUnix 115.45
255 TestRunningBinaryUpgrade 215.54
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 96.9
269 TestNetworkPlugins/group/false 3
280 TestStoppedBinaryUpgrade/Setup 2.59
281 TestStoppedBinaryUpgrade/Upgrade 156.64
282 TestNoKubernetes/serial/StartWithStopK8s 63.9
283 TestNoKubernetes/serial/Start 26.95
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
285 TestNoKubernetes/serial/ProfileList 29.3
286 TestNoKubernetes/serial/Stop 1.41
287 TestNoKubernetes/serial/StartNoArgs 22.1
289 TestPause/serial/Start 77.93
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.86
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
293 TestNetworkPlugins/group/auto/Start 61.13
294 TestNetworkPlugins/group/kindnet/Start 112.66
295 TestNetworkPlugins/group/auto/KubeletFlags 0.24
296 TestNetworkPlugins/group/auto/NetCatPod 10.26
297 TestNetworkPlugins/group/auto/DNS 0.24
298 TestNetworkPlugins/group/auto/Localhost 0.16
299 TestNetworkPlugins/group/auto/HairPin 0.19
300 TestNetworkPlugins/group/calico/Start 91.93
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
303 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
304 TestNetworkPlugins/group/custom-flannel/Start 84.82
305 TestNetworkPlugins/group/kindnet/DNS 0.19
306 TestNetworkPlugins/group/kindnet/Localhost 0.21
307 TestNetworkPlugins/group/kindnet/HairPin 0.17
308 TestNetworkPlugins/group/enable-default-cni/Start 108.74
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.2
311 TestNetworkPlugins/group/calico/NetCatPod 10.64
312 TestNetworkPlugins/group/calico/DNS 0.53
313 TestNetworkPlugins/group/calico/Localhost 0.15
314 TestNetworkPlugins/group/calico/HairPin 0.14
315 TestNetworkPlugins/group/flannel/Start 89.27
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.29
318 TestNetworkPlugins/group/custom-flannel/DNS 0.17
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/bridge/Start 62.51
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
331 TestNetworkPlugins/group/flannel/NetCatPod 11.28
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
333 TestNetworkPlugins/group/bridge/NetCatPod 11.28
334 TestNetworkPlugins/group/flannel/DNS 0.19
335 TestNetworkPlugins/group/flannel/Localhost 0.17
336 TestNetworkPlugins/group/flannel/HairPin 0.14
337 TestNetworkPlugins/group/bridge/DNS 33.14
340 TestNetworkPlugins/group/bridge/Localhost 0.15
341 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.20.0/json-events (54.67s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498297 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498297 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (54.667449702s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (54.67s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498297
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498297: exit status 85 (61.209124ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-498297 | jenkins | v1.33.1 | 31 Jul 24 18:14 UTC |          |
	|         | -p download-only-498297        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:14:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:14:53.261345  402325 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:14:53.261627  402325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:14:53.261637  402325 out.go:304] Setting ErrFile to fd 2...
	I0731 18:14:53.261641  402325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:14:53.261809  402325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	W0731 18:14:53.261926  402325 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19356-395032/.minikube/config/config.json: open /home/jenkins/minikube-integration/19356-395032/.minikube/config/config.json: no such file or directory
	I0731 18:14:53.262549  402325 out.go:298] Setting JSON to true
	I0731 18:14:53.263478  402325 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7036,"bootTime":1722442657,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:14:53.263542  402325 start.go:139] virtualization: kvm guest
	I0731 18:14:53.265894  402325 out.go:97] [download-only-498297] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0731 18:14:53.265996  402325 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 18:14:53.266032  402325 notify.go:220] Checking for updates...
	I0731 18:14:53.267393  402325 out.go:169] MINIKUBE_LOCATION=19356
	I0731 18:14:53.268878  402325 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:14:53.270335  402325 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:14:53.271844  402325 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:14:53.273306  402325 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 18:14:53.276171  402325 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 18:14:53.276420  402325 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:14:53.309844  402325 out.go:97] Using the kvm2 driver based on user configuration
	I0731 18:14:53.309877  402325 start.go:297] selected driver: kvm2
	I0731 18:14:53.309883  402325 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:14:53.310249  402325 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:14:53.310343  402325 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:14:53.326182  402325 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:14:53.326255  402325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:14:53.326930  402325 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 18:14:53.327168  402325 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 18:14:53.327203  402325 cni.go:84] Creating CNI manager for ""
	I0731 18:14:53.327218  402325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:14:53.327233  402325 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:14:53.327306  402325 start.go:340] cluster config:
	{Name:download-only-498297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-498297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:14:53.327536  402325 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:14:53.329658  402325 out.go:97] Downloading VM boot image ...
	I0731 18:14:53.329712  402325 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 18:15:04.028452  402325 out.go:97] Starting "download-only-498297" primary control-plane node in "download-only-498297" cluster
	I0731 18:15:04.028492  402325 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:15:04.140972  402325 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:15:04.141022  402325 cache.go:56] Caching tarball of preloaded images
	I0731 18:15:04.141201  402325 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:15:04.143277  402325 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 18:15:04.143303  402325 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:15:04.253873  402325 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:15:17.818096  402325 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:15:17.818191  402325 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:15:18.719117  402325 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 18:15:18.719473  402325 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/download-only-498297/config.json ...
	I0731 18:15:18.719507  402325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/download-only-498297/config.json: {Name:mkd5852d9da44ed2a4b189dd5c4e882f92375f8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:15:18.719659  402325 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 18:15:18.719837  402325 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-498297 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498297"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498297
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (13.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-445232 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-445232 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.767042574s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (13.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-445232
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-445232: exit status 85 (60.917582ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-498297 | jenkins | v1.33.1 | 31 Jul 24 18:14 UTC |                     |
	|         | -p download-only-498297        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC | 31 Jul 24 18:15 UTC |
	| delete  | -p download-only-498297        | download-only-498297 | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC | 31 Jul 24 18:15 UTC |
	| start   | -o=json --download-only        | download-only-445232 | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC |                     |
	|         | -p download-only-445232        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:15:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:15:48.255049  402675 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:15:48.255179  402675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:15:48.255188  402675 out.go:304] Setting ErrFile to fd 2...
	I0731 18:15:48.255192  402675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:15:48.255369  402675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:15:48.255920  402675 out.go:298] Setting JSON to true
	I0731 18:15:48.256879  402675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7091,"bootTime":1722442657,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:15:48.256978  402675 start.go:139] virtualization: kvm guest
	I0731 18:15:48.259024  402675 out.go:97] [download-only-445232] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:15:48.259180  402675 notify.go:220] Checking for updates...
	I0731 18:15:48.260475  402675 out.go:169] MINIKUBE_LOCATION=19356
	I0731 18:15:48.262090  402675 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:15:48.263590  402675 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:15:48.264922  402675 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:15:48.266227  402675 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 18:15:48.268757  402675 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 18:15:48.269008  402675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:15:48.300392  402675 out.go:97] Using the kvm2 driver based on user configuration
	I0731 18:15:48.300428  402675 start.go:297] selected driver: kvm2
	I0731 18:15:48.300433  402675 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:15:48.300840  402675 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:15:48.300943  402675 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:15:48.316309  402675 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:15:48.316398  402675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:15:48.316884  402675 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 18:15:48.317036  402675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 18:15:48.317105  402675 cni.go:84] Creating CNI manager for ""
	I0731 18:15:48.317117  402675 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:15:48.317124  402675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:15:48.317193  402675 start.go:340] cluster config:
	{Name:download-only-445232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-445232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:15:48.317292  402675 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:15:48.318773  402675 out.go:97] Starting "download-only-445232" primary control-plane node in "download-only-445232" cluster
	I0731 18:15:48.318797  402675 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:15:48.436146  402675 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 18:15:48.436189  402675 cache.go:56] Caching tarball of preloaded images
	I0731 18:15:48.436345  402675 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 18:15:48.438190  402675 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0731 18:15:48.438209  402675 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:15:48.549186  402675 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-445232 host does not exist
	  To start a cluster, run: "minikube start -p download-only-445232"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-445232
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (54.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-127403 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-127403 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (54.170363663s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (54.17s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-127403
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-127403: exit status 85 (62.177355ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-498297 | jenkins | v1.33.1 | 31 Jul 24 18:14 UTC |                     |
	|         | -p download-only-498297             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC | 31 Jul 24 18:15 UTC |
	| delete  | -p download-only-498297             | download-only-498297 | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC | 31 Jul 24 18:15 UTC |
	| start   | -o=json --download-only             | download-only-445232 | jenkins | v1.33.1 | 31 Jul 24 18:15 UTC |                     |
	|         | -p download-only-445232             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| delete  | -p download-only-445232             | download-only-445232 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC | 31 Jul 24 18:16 UTC |
	| start   | -o=json --download-only             | download-only-127403 | jenkins | v1.33.1 | 31 Jul 24 18:16 UTC |                     |
	|         | -p download-only-127403             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 18:16:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 18:16:02.350326  402895 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:16:02.350592  402895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:02.350604  402895 out.go:304] Setting ErrFile to fd 2...
	I0731 18:16:02.350610  402895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:16:02.350811  402895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:16:02.351406  402895 out.go:298] Setting JSON to true
	I0731 18:16:02.352349  402895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7105,"bootTime":1722442657,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:16:02.352454  402895 start.go:139] virtualization: kvm guest
	I0731 18:16:02.354541  402895 out.go:97] [download-only-127403] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:16:02.354701  402895 notify.go:220] Checking for updates...
	I0731 18:16:02.356055  402895 out.go:169] MINIKUBE_LOCATION=19356
	I0731 18:16:02.357525  402895 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:16:02.358905  402895 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:16:02.360351  402895 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:16:02.361662  402895 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 18:16:02.364006  402895 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 18:16:02.364214  402895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:16:02.395877  402895 out.go:97] Using the kvm2 driver based on user configuration
	I0731 18:16:02.395931  402895 start.go:297] selected driver: kvm2
	I0731 18:16:02.395941  402895 start.go:901] validating driver "kvm2" against <nil>
	I0731 18:16:02.396271  402895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:02.396356  402895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19356-395032/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 18:16:02.410865  402895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 18:16:02.410917  402895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 18:16:02.411445  402895 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 18:16:02.411609  402895 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 18:16:02.411637  402895 cni.go:84] Creating CNI manager for ""
	I0731 18:16:02.411646  402895 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 18:16:02.411657  402895 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 18:16:02.411720  402895 start.go:340] cluster config:
	{Name:download-only-127403 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-127403 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:16:02.411815  402895 iso.go:125] acquiring lock: {Name:mk8518875fbe243360caf271fccf05c9f8190836 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 18:16:02.413463  402895 out.go:97] Starting "download-only-127403" primary control-plane node in "download-only-127403" cluster
	I0731 18:16:02.413489  402895 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:16:02.576928  402895 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:16:02.576984  402895 cache.go:56] Caching tarball of preloaded images
	I0731 18:16:02.577171  402895 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:16:02.579073  402895 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 18:16:02.579113  402895 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:16:02.692991  402895 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 18:16:14.268930  402895 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:16:14.269026  402895 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19356-395032/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 18:16:15.004512  402895 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 18:16:15.004891  402895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/download-only-127403/config.json ...
	I0731 18:16:15.004920  402895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/download-only-127403/config.json: {Name:mk89d7e99e0870e6ed14c96a5dd358595371d9c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 18:16:15.005076  402895 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 18:16:15.005206  402895 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19356-395032/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-127403 host does not exist
	  To start a cluster, run: "minikube start -p download-only-127403"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-127403
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-995532 --alsologtostderr --binary-mirror http://127.0.0.1:39497 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-995532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-995532
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (66.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-954897 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-954897 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m5.000594401s)
helpers_test.go:175: Cleaning up "offline-crio-954897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-954897
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-954897: (1.05274456s)
--- PASS: TestOffline (66.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-469211
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-469211: exit status 85 (52.624815ms)

                                                
                                                
-- stdout --
	* Profile "addons-469211" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-469211"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-469211
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-469211: exit status 85 (52.981427ms)

                                                
                                                
-- stdout --
	* Profile "addons-469211" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-469211"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (211.94s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-469211 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-469211 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.944552414s)
--- PASS: TestAddons/Setup (211.94s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.77s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-469211 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-469211 get secret gcp-auth -n new-namespace
addons_test.go:670: (dbg) Non-zero exit: kubectl --context addons-469211 get secret gcp-auth -n new-namespace: exit status 1 (80.511161ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:662: (dbg) Run:  kubectl --context addons-469211 logs -l app=gcp-auth -n gcp-auth
addons_test.go:670: (dbg) Run:  kubectl --context addons-469211 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.77s)

                                                
                                    
TestAddons/parallel/Registry (18.36s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.428547ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-zzckf" [c1bb2989-95fe-499e-a046-21d50fcaa446] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004726412s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gkcvq" [5d23ea46-e28f-4922-8b86-7e1f8ea26754] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004446459s
addons_test.go:342: (dbg) Run:  kubectl --context addons-469211 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-469211 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-469211 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.466389887s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 ip
2024/07/31 18:21:08 [DEBUG] GET http://192.168.39.187:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.36s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.8s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rqzlw" [fb0dd451-4ba2-46a8-87f6-b78b48ca6b90] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004640761s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-469211
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-469211: (5.792091502s)
--- PASS: TestAddons/parallel/InspektorGadget (10.80s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.231656ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-8hlxh" [d2d05195-43ba-4de7-91ee-2237d543c3b1] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005549704s
addons_test.go:475: (dbg) Run:  kubectl --context addons-469211 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-469211 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.307969986s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.94s)

                                                
                                    
TestAddons/parallel/CSI (53.1s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.420127ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-469211 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-469211 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [95d3218d-ba36-43dc-9d37-8f337316412a] Pending
helpers_test.go:344: "task-pv-pod" [95d3218d-ba36-43dc-9d37-8f337316412a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [95d3218d-ba36-43dc-9d37-8f337316412a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004166499s
addons_test.go:590: (dbg) Run:  kubectl --context addons-469211 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-469211 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-469211 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-469211 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-469211 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-469211 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-469211 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1d234226-7be3-44bb-8f80-92c3f5754f74] Pending
helpers_test.go:344: "task-pv-pod-restore" [1d234226-7be3-44bb-8f80-92c3f5754f74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1d234226-7be3-44bb-8f80-92c3f5754f74] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004337158s
addons_test.go:632: (dbg) Run:  kubectl --context addons-469211 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-469211 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-469211 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.780029002s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.10s)

                                                
                                    
TestAddons/parallel/Headlamp (23.05s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-469211 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-469211 --alsologtostderr -v=1: (1.14889033s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-88t72" [1a993f37-d56c-41b4-8501-f325bd272dd8] Pending
helpers_test.go:344: "headlamp-7867546754-88t72" [1a993f37-d56c-41b4-8501-f325bd272dd8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-88t72" [1a993f37-d56c-41b4-8501-f325bd272dd8] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004660746s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable headlamp --alsologtostderr -v=1: (5.894958902s)
--- PASS: TestAddons/parallel/Headlamp (23.05s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-wfk4z" [8feeaf35-bd9f-49a7-8191-b8c16033d425] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004354477s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-469211
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

                                                
                                    
TestAddons/parallel/LocalPath (59.09s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-469211 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-469211 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [635a14c9-6dc6-4846-857f-217e20b1adfe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [635a14c9-6dc6-4846-857f-217e20b1adfe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [635a14c9-6dc6-4846-857f-217e20b1adfe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.009013263s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-469211 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 ssh "cat /opt/local-path-provisioner/pvc-81750708-88a8-4465-b0b3-553afcc3b33e_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-469211 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-469211 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.247585609s)
--- PASS: TestAddons/parallel/LocalPath (59.09s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rnrgk" [63c8e69d-6346-4ca1-869b-ff23aa567942] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005242966s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-469211
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                    
TestAddons/parallel/Yakd (12.06s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-f5w2q" [4fbde77c-7e64-4429-9619-b3c9170551ce] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004795857s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-469211 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-469211 addons disable yakd --alsologtostderr -v=1: (6.050233342s)
--- PASS: TestAddons/parallel/Yakd (12.06s)

                                                
                                    
TestCertOptions (42.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-235206 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0731 19:30:32.743142  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-235206 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (40.528652665s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-235206 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-235206 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-235206 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-235206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-235206
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-235206: (1.012420393s)
--- PASS: TestCertOptions (42.01s)

                                                
                                    
TestCertExpiration (279.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362350 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362350 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m18.106559976s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-362350 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-362350 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.035557188s)
helpers_test.go:175: Cleaning up "cert-expiration-362350" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-362350
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-362350: (1.047648207s)
--- PASS: TestCertExpiration (279.19s)

                                                
                                    
TestForceSystemdFlag (99.75s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-748014 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0731 19:28:31.066383  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 19:28:48.018354  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-748014 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m38.722462326s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-748014 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-748014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-748014
--- PASS: TestForceSystemdFlag (99.75s)

                                                
                                    
TestForceSystemdEnv (72.29s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-114834 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-114834 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.477584025s)
helpers_test.go:175: Cleaning up "force-systemd-env-114834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-114834
--- PASS: TestForceSystemdEnv (72.29s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.02s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.02s)

                                                
                                    
TestErrorSpam/setup (44.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-624066 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-624066 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-624066 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-624066 --driver=kvm2  --container-runtime=crio: (44.186451286s)
--- PASS: TestErrorSpam/setup (44.19s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 pause
E0731 18:30:32.742638  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:32.748575  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:32.758900  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:32.779232  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:32.819605  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:32.900133  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 pause
E0731 18:30:33.061004  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:33.381644  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 unpause
E0731 18:30:34.022374  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 unpause
--- PASS: TestErrorSpam/unpause (1.61s)

                                                
                                    
TestErrorSpam/stop (5.33s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop
E0731 18:30:35.302701  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop: (2.271261711s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop
E0731 18:30:37.863025  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop: (1.782697977s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-624066 --log_dir /tmp/nospam-624066 stop: (1.277446596s)
--- PASS: TestErrorSpam/stop (5.33s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19356-395032/.minikube/files/etc/test/nested/copy/402313/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.92s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0731 18:30:42.984141  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:30:53.224419  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:31:13.705678  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:31:54.666614  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-780909 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.917636544s)
--- PASS: TestFunctional/serial/StartWithProxy (97.92s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.13s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-780909 --alsologtostderr -v=8: (39.132159731s)
functional_test.go:659: soft start took 39.133057317s for "functional-780909" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.13s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-780909 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:3.1: (1.109131774s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:3.3: (1.237326823s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 cache add registry.k8s.io/pause:latest: (1.061862538s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-780909 /tmp/TestFunctionalserialCacheCmdcacheadd_local1485864944/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache add minikube-local-cache-test:functional-780909
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 cache add minikube-local-cache-test:functional-780909: (1.904358812s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache delete minikube-local-cache-test:functional-780909
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-780909
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.470592ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
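
The flow above amounts to four CLI calls: remove registry.k8s.io/pause:latest from the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A minimal standalone sketch of that sequence, assuming the same binary path and profile name shown in the log (this is illustrative only, not the test's own code in functional_test.go):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-780909"
	img := "registry.k8s.io/pause:latest"
	_ = run("-p", p, "ssh", "sudo crictl rmi "+img)
	// A non-zero exit is expected here: the image was just removed from the node.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	_ = run("-p", p, "cache", "reload")
	// After the reload the image should be back on the node.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}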

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 kubectl -- --context functional-780909 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-780909 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0731 18:33:16.588161  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-780909 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.878672829s)
functional_test.go:757: restart took 34.878811s for "functional-780909" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.88s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-780909 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
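
The health check above parses the control-plane pod list and asserts that each component is in phase Running with a Ready condition of True. A rough equivalent using a kubectl jsonpath template instead of full JSON parsing (a sketch under the same context name, not the test implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print "<name> <phase> <Ready condition>" for every control-plane pod.
	out, err := exec.Command("kubectl", "--context", "functional-780909",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}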

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 logs: (1.402967941s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 logs --file /tmp/TestFunctionalserialLogsFileCmd1299694605/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 logs --file /tmp/TestFunctionalserialLogsFileCmd1299694605/001/logs.txt: (1.436868792s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
TestFunctional/serial/InvalidService (3.99s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-780909 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-780909
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-780909: exit status 115 (276.609346ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.37:31393 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-780909 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 config get cpus: exit status 14 (56.319608ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 config get cpus: exit status 14 (57.026492ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
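
Both non-zero exits above are the expected behaviour: `config get` on a key that was never set (or was just unset) exits with status 14 and reports the error on stderr. A minimal sketch of checking that exit code from Go with only the standard library (binary path and profile name taken from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-780909", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	// Exit status 14 is what the log shows for a missing config key.
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("key not set, as expected: %s", out)
		return
	}
	fmt.Printf("unexpected result (err=%v): %s", err, out)
}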

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-780909 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-780909 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 413039: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.24s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-780909 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.166284ms)

                                                
                                                
-- stdout --
	* [functional-780909] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:34:15.540687  412648 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:34:15.540807  412648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:15.540817  412648 out.go:304] Setting ErrFile to fd 2...
	I0731 18:34:15.540821  412648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:15.541009  412648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:34:15.541537  412648 out.go:298] Setting JSON to false
	I0731 18:34:15.542587  412648 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8199,"bootTime":1722442657,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:34:15.542649  412648 start.go:139] virtualization: kvm guest
	I0731 18:34:15.545116  412648 out.go:177] * [functional-780909] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 18:34:15.546501  412648 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:34:15.546566  412648 notify.go:220] Checking for updates...
	I0731 18:34:15.549392  412648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:34:15.550673  412648 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:34:15.551981  412648 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:15.553457  412648 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:34:15.554733  412648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:34:15.556409  412648 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:34:15.556815  412648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:15.556896  412648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:15.572920  412648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0731 18:34:15.573437  412648 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:15.573977  412648 main.go:141] libmachine: Using API Version  1
	I0731 18:34:15.573992  412648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:15.574259  412648 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:15.574443  412648 main.go:141] libmachine: (functional-780909) Calling .DriverName
	I0731 18:34:15.574715  412648 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:34:15.575076  412648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:15.575117  412648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:15.590553  412648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I0731 18:34:15.591038  412648 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:15.591567  412648 main.go:141] libmachine: Using API Version  1
	I0731 18:34:15.591587  412648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:15.591909  412648 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:15.592112  412648 main.go:141] libmachine: (functional-780909) Calling .DriverName
	I0731 18:34:15.628027  412648 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 18:34:15.629374  412648 start.go:297] selected driver: kvm2
	I0731 18:34:15.629389  412648 start.go:901] validating driver "kvm2" against &{Name:functional-780909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-780909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:34:15.629529  412648 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:34:15.631705  412648 out.go:177] 
	W0731 18:34:15.633228  412648 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 18:34:15.634659  412648 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-780909 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-780909 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.874309ms)

                                                
                                                
-- stdout --
	* [functional-780909] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 18:34:15.933782  412803 out.go:291] Setting OutFile to fd 1 ...
	I0731 18:34:15.933991  412803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:15.934003  412803 out.go:304] Setting ErrFile to fd 2...
	I0731 18:34:15.934008  412803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 18:34:15.934326  412803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 18:34:15.934917  412803 out.go:298] Setting JSON to false
	I0731 18:34:15.936162  412803 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8199,"bootTime":1722442657,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 18:34:15.936228  412803 start.go:139] virtualization: kvm guest
	I0731 18:34:15.938439  412803 out.go:177] * [functional-780909] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 18:34:15.940079  412803 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 18:34:15.940089  412803 notify.go:220] Checking for updates...
	I0731 18:34:15.942642  412803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 18:34:15.943862  412803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 18:34:15.945108  412803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 18:34:15.946269  412803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 18:34:15.947451  412803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 18:34:15.949017  412803 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 18:34:15.949462  412803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:15.949533  412803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:15.966251  412803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0731 18:34:15.966664  412803 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:15.967164  412803 main.go:141] libmachine: Using API Version  1
	I0731 18:34:15.967184  412803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:15.967507  412803 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:15.967732  412803 main.go:141] libmachine: (functional-780909) Calling .DriverName
	I0731 18:34:15.968034  412803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 18:34:15.968318  412803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 18:34:15.968356  412803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 18:34:15.987670  412803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0731 18:34:15.988205  412803 main.go:141] libmachine: () Calling .GetVersion
	I0731 18:34:15.988820  412803 main.go:141] libmachine: Using API Version  1
	I0731 18:34:15.988849  412803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 18:34:15.989243  412803 main.go:141] libmachine: () Calling .GetMachineName
	I0731 18:34:15.989500  412803 main.go:141] libmachine: (functional-780909) Calling .DriverName
	I0731 18:34:16.024589  412803 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 18:34:16.026037  412803 start.go:297] selected driver: kvm2
	I0731 18:34:16.026060  412803 start.go:901] validating driver "kvm2" against &{Name:functional-780909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-780909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.37 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 18:34:16.026201  412803 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 18:34:16.028568  412803 out.go:177] 
	W0731 18:34:16.029744  412803 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 18:34:16.031020  412803 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-780909 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-780909 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-6jq7d" [3f8948f5-b714-4c13-84e5-df92f7738640] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-6jq7d" [3f8948f5-b714-4c13-84e5-df92f7738640] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00354721s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.37:31806
functional_test.go:1671: http://192.168.39.37:31806: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-6jq7d

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.37:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.37:31806
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.93s)
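
The sequence above is: create an echoserver deployment, expose it as a NodePort service, ask minikube for the reachable URL, and issue a plain HTTP GET against it. A condensed sketch of the same flow (pod-readiness waiting is elided; this is not the test's own code):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-780909"
	// Create and expose the deployment; errors are ignored here for brevity.
	// The real test also waits until the pod is Running before connecting.
	exec.Command("kubectl", "--context", profile, "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", profile, "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

	// minikube prints a reachable NodePort URL, e.g. http://192.168.39.37:31806.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}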

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (50.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [76679134-5d9a-4288-89dd-30db0f4adc7e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005332267s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-780909 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-780909 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-780909 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-780909 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-780909 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bd7b23c7-394e-40ef-adcd-02f95e7f4887] Pending
helpers_test.go:344: "sp-pod" [bd7b23c7-394e-40ef-adcd-02f95e7f4887] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bd7b23c7-394e-40ef-adcd-02f95e7f4887] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.010312856s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-780909 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-780909 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-780909 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [54fbeaaa-e5bb-4e5f-83d4-5d3322a14a9c] Pending
helpers_test.go:344: "sp-pod" [54fbeaaa-e5bb-4e5f-83d4-5d3322a14a9c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [54fbeaaa-e5bb-4e5f-83d4-5d3322a14a9c] Running
2024/07/31 18:34:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004681208s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-780909 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.89s)
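
The persistence check above writes a file into the PVC-backed pod, deletes and recreates the pod from the same manifest, then lists the mount to confirm the file survived. A compressed sketch of that sequence, assuming the testdata manifests shown in the log and omitting the readiness waits:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the profile's context and returns its output.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-780909"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait until sp-pod is Running (elided) ...
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait until the recreated sp-pod is Running (elided) ...
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	// The file written before the pod was recreated should still be listed.
	fmt.Printf("ls /tmp/mount (err=%v):\n%s", err, out)
}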

                                                
                                    
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh -n functional-780909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cp functional-780909:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd774905452/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh -n functional-780909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh -n functional-780909 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.24s)

                                                
                                    
TestFunctional/parallel/MySQL (25.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-780909 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-5ps2j" [7b9da917-63ba-483a-b5f3-deb4ac619f69] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-5ps2j" [7b9da917-63ba-483a-b5f3-deb4ac619f69] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.006336245s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-780909 exec mysql-64454c8b5c-5ps2j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-780909 exec mysql-64454c8b5c-5ps2j -- mysql -ppassword -e "show databases;": exit status 1 (484.615818ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-780909 exec mysql-64454c8b5c-5ps2j -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-780909 exec mysql-64454c8b5c-5ps2j -- mysql -ppassword -e "show databases;": exit status 1 (248.090949ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-780909 exec mysql-64454c8b5c-5ps2j -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.69s)
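
The two non-zero exits above are expected: mysqld inside the pod is still initialising when the first queries run, so the check is simply retried until it succeeds. A minimal retry sketch of that pattern (the pod name is copied from this run's log; in practice it would be discovered via the app=mysql label):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-64454c8b5c-5ps2j" // from this run; normally looked up by label selector
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-780909",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("connected on attempt %d:\n%s", attempt, out)
			return
		}
		// Typical early failure: "Can't connect to local MySQL server through socket".
		fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}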

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/402313/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /etc/test/nested/copy/402313/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/402313.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /etc/ssl/certs/402313.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/402313.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /usr/share/ca-certificates/402313.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4023132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /etc/ssl/certs/4023132.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4023132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /usr/share/ca-certificates/4023132.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)
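
The three paths checked per certificate above should hold the same PEM content (/etc/ssl/certs/51391683.0 being the OpenSSL hash-named copy). A small sketch that spot-checks this by comparing the bytes returned from each in-VM location (an illustrative check, not the test's own method):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// sshCat reads a file inside the VM via `minikube ssh`.
func sshCat(path string) []byte {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-780909",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	paths := []string{
		"/etc/ssl/certs/402313.pem",
		"/usr/share/ca-certificates/402313.pem",
		"/etc/ssl/certs/51391683.0",
	}
	first := sshCat(paths[0])
	for _, p := range paths[1:] {
		fmt.Printf("%s matches %s: %v\n", p, paths[0], bytes.Equal(first, sshCat(p)))
	}
}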

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-780909 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "sudo systemctl is-active docker": exit status 1 (237.982758ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "sudo systemctl is-active containerd": exit status 1 (196.999215ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
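
Both non-zero exits above are expected on a crio cluster: `systemctl is-active` prints "inactive" and exits with status 3 for a unit that is not running, which is what the log shows for docker and containerd. A minimal sketch of that check (binary path and profile from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// Output() captures only stdout ("inactive"); the non-zero exit lands in err.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-780909",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %q (exit err: %v) -> disabled as expected: %v\n",
			unit, state, err, state != "active")
	}
}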

                                                
                                    
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780909 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-780909
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-780909
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780909 image ls --format short --alsologtostderr:
I0731 18:34:18.472806  413121 out.go:291] Setting OutFile to fd 1 ...
I0731 18:34:18.472942  413121 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:18.472952  413121 out.go:304] Setting ErrFile to fd 2...
I0731 18:34:18.472957  413121 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:18.473167  413121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
I0731 18:34:18.473730  413121 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:18.473831  413121 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:18.474203  413121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:18.474258  413121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:18.489701  413121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44421
I0731 18:34:18.490239  413121 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:18.490804  413121 main.go:141] libmachine: Using API Version  1
I0731 18:34:18.490829  413121 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:18.491166  413121 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:18.491352  413121 main.go:141] libmachine: (functional-780909) Calling .GetState
I0731 18:34:18.493218  413121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:18.493263  413121 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:18.508958  413121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36459
I0731 18:34:18.509442  413121 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:18.509954  413121 main.go:141] libmachine: Using API Version  1
I0731 18:34:18.509991  413121 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:18.510328  413121 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:18.510532  413121 main.go:141] libmachine: (functional-780909) Calling .DriverName
I0731 18:34:18.510735  413121 ssh_runner.go:195] Run: systemctl --version
I0731 18:34:18.510762  413121 main.go:141] libmachine: (functional-780909) Calling .GetSSHHostname
I0731 18:34:18.513620  413121 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:18.513984  413121 main.go:141] libmachine: (functional-780909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:a7:fd", ip: ""} in network mk-functional-780909: {Iface:virbr1 ExpiryTime:2024-07-31 19:30:55 +0000 UTC Type:0 Mac:52:54:00:73:a7:fd Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-780909 Clientid:01:52:54:00:73:a7:fd}
I0731 18:34:18.514023  413121 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined IP address 192.168.39.37 and MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:18.514183  413121 main.go:141] libmachine: (functional-780909) Calling .GetSSHPort
I0731 18:34:18.514326  413121 main.go:141] libmachine: (functional-780909) Calling .GetSSHKeyPath
I0731 18:34:18.514475  413121 main.go:141] libmachine: (functional-780909) Calling .GetSSHUsername
I0731 18:34:18.514602  413121 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/functional-780909/id_rsa Username:docker}
I0731 18:34:18.615060  413121 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 18:34:18.652312  413121 main.go:141] libmachine: Making call to close driver server
I0731 18:34:18.652330  413121 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:18.652660  413121 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:18.652693  413121 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:18.652696  413121 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:18.652755  413121 main.go:141] libmachine: Making call to close driver server
I0731 18:34:18.652766  413121 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:18.653056  413121 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:18.653085  413121 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:18.653084  413121 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
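Since the short format prints one repo:tag per line, it greps cleanly; a small sketch (image name taken from the listing above):

    # Confirm the locally cached test image shows up in the runtime's image store.
    out/minikube-linux-amd64 -p functional-780909 image ls --format short \
      | grep localhost/minikube-local-cache-test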

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780909 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-780909  | 6811f8512a0e9 | 3.33kB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| localhost/my-image                      | functional-780909  | 1ecf016d2c0f1 | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/kicbase/echo-server           | functional-780909  | 9056ab77afb8e | 4.94MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780909 image ls --format table --alsologtostderr:
I0731 18:34:22.576543  413320 out.go:291] Setting OutFile to fd 1 ...
I0731 18:34:22.576666  413320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:22.576675  413320 out.go:304] Setting ErrFile to fd 2...
I0731 18:34:22.576680  413320 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:22.576847  413320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
I0731 18:34:22.577455  413320 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:22.577558  413320 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:22.577986  413320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:22.578040  413320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:22.594980  413320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
I0731 18:34:22.595437  413320 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:22.596088  413320 main.go:141] libmachine: Using API Version  1
I0731 18:34:22.596127  413320 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:22.596528  413320 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:22.596767  413320 main.go:141] libmachine: (functional-780909) Calling .GetState
I0731 18:34:22.598616  413320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:22.598688  413320 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:22.614940  413320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33301
I0731 18:34:22.615401  413320 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:22.615903  413320 main.go:141] libmachine: Using API Version  1
I0731 18:34:22.615927  413320 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:22.616258  413320 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:22.616498  413320 main.go:141] libmachine: (functional-780909) Calling .DriverName
I0731 18:34:22.616842  413320 ssh_runner.go:195] Run: systemctl --version
I0731 18:34:22.616879  413320 main.go:141] libmachine: (functional-780909) Calling .GetSSHHostname
I0731 18:34:22.619664  413320 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:22.620157  413320 main.go:141] libmachine: (functional-780909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:a7:fd", ip: ""} in network mk-functional-780909: {Iface:virbr1 ExpiryTime:2024-07-31 19:30:55 +0000 UTC Type:0 Mac:52:54:00:73:a7:fd Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-780909 Clientid:01:52:54:00:73:a7:fd}
I0731 18:34:22.620185  413320 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined IP address 192.168.39.37 and MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:22.620311  413320 main.go:141] libmachine: (functional-780909) Calling .GetSSHPort
I0731 18:34:22.620510  413320 main.go:141] libmachine: (functional-780909) Calling .GetSSHKeyPath
I0731 18:34:22.620676  413320 main.go:141] libmachine: (functional-780909) Calling .GetSSHUsername
I0731 18:34:22.620837  413320 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/functional-780909/id_rsa Username:docker}
I0731 18:34:22.702785  413320 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 18:34:22.747406  413320 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.747429  413320 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.747728  413320 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.747747  413320 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:22.747746  413320 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:22.747766  413320 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.747857  413320 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.748059  413320 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.748072  413320 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:22.748084  413320 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780909 image ls --format json --alsologtostderr:
[{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"rep
oTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"f0d116af9b174631e83aacf5e4c1fa295e42a82a8a06a250c893f848aabf4ec3","repoDigests":["docker.io/library/4955ca6faf28c335679d0c862741f751d21ed59ac718474a070c00f9db36e5cc-tmp@sha256:95698ef5f585cf081803215b51e2fafa2
83c86aadc46481463e27c2f74660c06"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6811f8512a0e98a669b2c8d8acd121aa84fdb7ec2571ed10732d825d1ffea18a","repoDigests":["localhost/minikube-local-cache-test@sha256:7253383379f6b9e0d98878758e25306c33544b8f1f22899653c8e941a2804bc7"],"repoTags":["localhost/minikube-local-cache-test:functional-780909"],"size":"3330"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io
/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-780909"],"size":"4943877"},{"id":"1ecf016d2c0f197046e1280aea64e9d74128defe97fb573e4103fe3472138e71","repoDigests":["localhost/my-image@sha256:a3b4b0173cd4e1e418af05f04d331aab1781ce1d5fa29bee29d7bb49810b9afc"],"repoTags":["localhost/my-image:functional-780909"],"size":"1468600"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube
-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c
441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb025d2cfa592b9381d01e12
2e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780909 image ls --format json --alsologtostderr:
I0731 18:34:22.364479  413296 out.go:291] Setting OutFile to fd 1 ...
I0731 18:34:22.364730  413296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:22.364739  413296 out.go:304] Setting ErrFile to fd 2...
I0731 18:34:22.364743  413296 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:22.364912  413296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
I0731 18:34:22.365464  413296 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:22.365564  413296 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:22.366031  413296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:22.366081  413296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:22.381307  413296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
I0731 18:34:22.381768  413296 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:22.382321  413296 main.go:141] libmachine: Using API Version  1
I0731 18:34:22.382342  413296 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:22.382652  413296 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:22.382810  413296 main.go:141] libmachine: (functional-780909) Calling .GetState
I0731 18:34:22.384755  413296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:22.384804  413296 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:22.400571  413296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
I0731 18:34:22.401073  413296 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:22.401540  413296 main.go:141] libmachine: Using API Version  1
I0731 18:34:22.401570  413296 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:22.401934  413296 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:22.402145  413296 main.go:141] libmachine: (functional-780909) Calling .DriverName
I0731 18:34:22.402358  413296 ssh_runner.go:195] Run: systemctl --version
I0731 18:34:22.402391  413296 main.go:141] libmachine: (functional-780909) Calling .GetSSHHostname
I0731 18:34:22.404992  413296 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:22.405395  413296 main.go:141] libmachine: (functional-780909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:a7:fd", ip: ""} in network mk-functional-780909: {Iface:virbr1 ExpiryTime:2024-07-31 19:30:55 +0000 UTC Type:0 Mac:52:54:00:73:a7:fd Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-780909 Clientid:01:52:54:00:73:a7:fd}
I0731 18:34:22.405427  413296 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined IP address 192.168.39.37 and MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:22.405526  413296 main.go:141] libmachine: (functional-780909) Calling .GetSSHPort
I0731 18:34:22.405715  413296 main.go:141] libmachine: (functional-780909) Calling .GetSSHKeyPath
I0731 18:34:22.405902  413296 main.go:141] libmachine: (functional-780909) Calling .GetSSHUsername
I0731 18:34:22.406065  413296 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/functional-780909/id_rsa Username:docker}
I0731 18:34:22.486971  413296 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 18:34:22.524853  413296 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.524876  413296 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.525168  413296 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.525188  413296 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:22.525204  413296 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.525213  413296 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.525211  413296 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:22.525455  413296 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.525469  413296 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
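The JSON format returns an array of objects with id, repoDigests, repoTags and size fields (as in the dump above), so it pipes straight into jq; a sketch assuming jq is available on the host:

    # List every tag known to the runtime, one per line; the trailing "?" skips
    # untagged entries such as intermediate build layers.
    out/minikube-linux-amd64 -p functional-780909 image ls --format json \
      | jq -r '.[].repoTags[]?'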

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780909 image ls --format yaml --alsologtostderr:
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-780909
size: "4943877"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 6811f8512a0e98a669b2c8d8acd121aa84fdb7ec2571ed10732d825d1ffea18a
repoDigests:
- localhost/minikube-local-cache-test@sha256:7253383379f6b9e0d98878758e25306c33544b8f1f22899653c8e941a2804bc7
repoTags:
- localhost/minikube-local-cache-test:functional-780909
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780909 image ls --format yaml --alsologtostderr:
I0731 18:34:18.699326  413145 out.go:291] Setting OutFile to fd 1 ...
I0731 18:34:18.699453  413145 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:18.699465  413145 out.go:304] Setting ErrFile to fd 2...
I0731 18:34:18.699471  413145 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:18.699652  413145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
I0731 18:34:18.700236  413145 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:18.700333  413145 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:18.700723  413145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:18.700781  413145 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:18.716151  413145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
I0731 18:34:18.716698  413145 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:18.717353  413145 main.go:141] libmachine: Using API Version  1
I0731 18:34:18.717378  413145 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:18.717779  413145 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:18.717976  413145 main.go:141] libmachine: (functional-780909) Calling .GetState
I0731 18:34:18.719971  413145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:18.720020  413145 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:18.735338  413145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
I0731 18:34:18.735782  413145 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:18.736388  413145 main.go:141] libmachine: Using API Version  1
I0731 18:34:18.736417  413145 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:18.736762  413145 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:18.737005  413145 main.go:141] libmachine: (functional-780909) Calling .DriverName
I0731 18:34:18.737230  413145 ssh_runner.go:195] Run: systemctl --version
I0731 18:34:18.737258  413145 main.go:141] libmachine: (functional-780909) Calling .GetSSHHostname
I0731 18:34:18.740437  413145 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:18.740921  413145 main.go:141] libmachine: (functional-780909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:a7:fd", ip: ""} in network mk-functional-780909: {Iface:virbr1 ExpiryTime:2024-07-31 19:30:55 +0000 UTC Type:0 Mac:52:54:00:73:a7:fd Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-780909 Clientid:01:52:54:00:73:a7:fd}
I0731 18:34:18.740948  413145 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined IP address 192.168.39.37 and MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:18.741060  413145 main.go:141] libmachine: (functional-780909) Calling .GetSSHPort
I0731 18:34:18.741230  413145 main.go:141] libmachine: (functional-780909) Calling .GetSSHKeyPath
I0731 18:34:18.741394  413145 main.go:141] libmachine: (functional-780909) Calling .GetSSHUsername
I0731 18:34:18.741515  413145 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/functional-780909/id_rsa Username:docker}
I0731 18:34:18.827548  413145 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 18:34:18.866165  413145 main.go:141] libmachine: Making call to close driver server
I0731 18:34:18.866183  413145 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:18.866509  413145 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:18.866534  413145 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:18.866541  413145 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:18.866559  413145 main.go:141] libmachine: Making call to close driver server
I0731 18:34:18.866570  413145 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:18.866841  413145 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:18.866857  413145 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:18.866859  413145 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh pgrep buildkitd: exit status 1 (195.463449ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image build -t localhost/my-image:functional-780909 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 image build -t localhost/my-image:functional-780909 testdata/build --alsologtostderr: (3.031389162s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-780909 image build -t localhost/my-image:functional-780909 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f0d116af9b1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-780909
--> 1ecf016d2c0
Successfully tagged localhost/my-image:functional-780909
1ecf016d2c0f197046e1280aea64e9d74128defe97fb573e4103fe3472138e71
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-780909 image build -t localhost/my-image:functional-780909 testdata/build --alsologtostderr:
I0731 18:34:19.109516  413199 out.go:291] Setting OutFile to fd 1 ...
I0731 18:34:19.109646  413199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:19.109656  413199 out.go:304] Setting ErrFile to fd 2...
I0731 18:34:19.109662  413199 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 18:34:19.109942  413199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
I0731 18:34:19.110633  413199 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:19.111245  413199 config.go:182] Loaded profile config "functional-780909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 18:34:19.111668  413199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:19.111704  413199 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:19.127477  413199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
I0731 18:34:19.127903  413199 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:19.128580  413199 main.go:141] libmachine: Using API Version  1
I0731 18:34:19.128605  413199 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:19.129020  413199 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:19.129231  413199 main.go:141] libmachine: (functional-780909) Calling .GetState
I0731 18:34:19.131128  413199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 18:34:19.131168  413199 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 18:34:19.146681  413199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
I0731 18:34:19.147067  413199 main.go:141] libmachine: () Calling .GetVersion
I0731 18:34:19.147531  413199 main.go:141] libmachine: Using API Version  1
I0731 18:34:19.147552  413199 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 18:34:19.147870  413199 main.go:141] libmachine: () Calling .GetMachineName
I0731 18:34:19.148055  413199 main.go:141] libmachine: (functional-780909) Calling .DriverName
I0731 18:34:19.148287  413199 ssh_runner.go:195] Run: systemctl --version
I0731 18:34:19.148317  413199 main.go:141] libmachine: (functional-780909) Calling .GetSSHHostname
I0731 18:34:19.151158  413199 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:19.151560  413199 main.go:141] libmachine: (functional-780909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:a7:fd", ip: ""} in network mk-functional-780909: {Iface:virbr1 ExpiryTime:2024-07-31 19:30:55 +0000 UTC Type:0 Mac:52:54:00:73:a7:fd Iaid: IPaddr:192.168.39.37 Prefix:24 Hostname:functional-780909 Clientid:01:52:54:00:73:a7:fd}
I0731 18:34:19.151585  413199 main.go:141] libmachine: (functional-780909) DBG | domain functional-780909 has defined IP address 192.168.39.37 and MAC address 52:54:00:73:a7:fd in network mk-functional-780909
I0731 18:34:19.151728  413199 main.go:141] libmachine: (functional-780909) Calling .GetSSHPort
I0731 18:34:19.151905  413199 main.go:141] libmachine: (functional-780909) Calling .GetSSHKeyPath
I0731 18:34:19.152061  413199 main.go:141] libmachine: (functional-780909) Calling .GetSSHUsername
I0731 18:34:19.152211  413199 sshutil.go:53] new ssh client: &{IP:192.168.39.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/functional-780909/id_rsa Username:docker}
I0731 18:34:19.235343  413199 build_images.go:161] Building image from path: /tmp/build.2159472292.tar
I0731 18:34:19.235411  413199 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 18:34:19.246202  413199 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2159472292.tar
I0731 18:34:19.250558  413199 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2159472292.tar: stat -c "%s %y" /var/lib/minikube/build/build.2159472292.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2159472292.tar': No such file or directory
I0731 18:34:19.250600  413199 ssh_runner.go:362] scp /tmp/build.2159472292.tar --> /var/lib/minikube/build/build.2159472292.tar (3072 bytes)
I0731 18:34:19.277536  413199 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2159472292
I0731 18:34:19.287754  413199 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2159472292 -xf /var/lib/minikube/build/build.2159472292.tar
I0731 18:34:19.297745  413199 crio.go:315] Building image: /var/lib/minikube/build/build.2159472292
I0731 18:34:19.297824  413199 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-780909 /var/lib/minikube/build/build.2159472292 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 18:34:22.070705  413199 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-780909 /var/lib/minikube/build/build.2159472292 --cgroup-manager=cgroupfs: (2.772844078s)
I0731 18:34:22.070836  413199 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2159472292
I0731 18:34:22.082057  413199 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2159472292.tar
I0731 18:34:22.092843  413199 build_images.go:217] Built localhost/my-image:functional-780909 from /tmp/build.2159472292.tar
I0731 18:34:22.092891  413199 build_images.go:133] succeeded building to: functional-780909
I0731 18:34:22.092900  413199 build_images.go:134] failed building to: 
I0731 18:34:22.092932  413199 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.092949  413199 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.093260  413199 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.093289  413199 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 18:34:22.093299  413199 main.go:141] libmachine: Making call to close driver server
I0731 18:34:22.093307  413199 main.go:141] libmachine: (functional-780909) Calling .Close
I0731 18:34:22.093585  413199 main.go:141] libmachine: (functional-780909) DBG | Closing plugin on server side
I0731 18:34:22.093627  413199 main.go:141] libmachine: Successfully made call to close driver server
I0731 18:34:22.093639  413199 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
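From the three build steps printed above, the testdata/build context reduces to a three-line Dockerfile; a sketch of rebuilding the same image by hand (Dockerfile contents reconstructed from the step output, not copied from the repository):

    # testdata/build is assumed to contain content.txt plus a Dockerfile equivalent to:
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    # minikube tars the context, ships it into the node and runs podman build there,
    # as the stderr trace above shows.
    out/minikube-linux-amd64 -p functional-780909 image build \
      -t localhost/my-image:functional-780909 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-780909 image ls | grep my-image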

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.917750053s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-780909
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
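All three subtests run the same command; update-context refreshes the kubeconfig entry for the profile when the cluster's IP or port has changed. A quick hand check after running it (the kubectl invocations are illustrative, not part of the test):

    # Re-sync the kubeconfig entry, then confirm the context resolves and the API answers.
    out/minikube-linux-amd64 -p functional-780909 update-context
    kubectl config get-contexts functional-780909
    kubectl --context functional-780909 get nodes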

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (24.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-780909 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-780909 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-62q6x" [2429f615-1b41-4735-8ef6-5a357a0c3726] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-62q6x" [2429f615-1b41-4735-8ef6-5a357a0c3726] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.004356033s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.19s)
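The deployment the service subtests exercise is created with plain kubectl, so the same flow can be reproduced outside the test, using kubectl wait in place of the helper's pod polling (context name taken from this run):

    # Deploy the echo server, expose it on a NodePort, and wait until it is serving.
    kubectl --context functional-780909 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-780909 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-780909 wait --for=condition=available deployment/hello-node --timeout=600s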

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image load --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 image load --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr: (1.02044486s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image load --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-780909
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image load --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 image load --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr: (3.061992076s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image save docker.io/kicbase/echo-server:functional-780909 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 image save docker.io/kicbase/echo-server:functional-780909 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.98613633s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image rm docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-780909 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.680833808s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.05s)
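
Note: ImageSaveToFile, ImageRemove and ImageLoadFromFile together amount to a save / remove / reload round-trip through a tarball. A rough manual equivalent, using the same commands as above (the tarball path is just what this Jenkins workspace used; any writable path should do):
	out/minikube-linux-amd64 -p functional-780909 image save docker.io/kicbase/echo-server:functional-780909 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-780909 image rm docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
	out/minikube-linux-amd64 -p functional-780909 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-780909 image ls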

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-780909
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 image save --daemon docker.io/kicbase/echo-server:functional-780909 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-780909
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service list -o json
functional_test.go:1490: Took "454.378259ms" to run "out/minikube-linux-amd64 -p functional-780909 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.37:31451
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.37:31451
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)
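
Note: the ServiceCmd subtests above all query the same hello-node service created earlier in this run; stripped of test plumbing, the variants boil down to (sketch, same profile and service names as this run):
	out/minikube-linux-amd64 -p functional-780909 service list -o json
	out/minikube-linux-amd64 -p functional-780909 service hello-node --url
	out/minikube-linux-amd64 -p functional-780909 service --namespace=default --https --url hello-node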

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "226.326003ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.53215ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "238.041193ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "47.738743ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.29s)
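
Note: the ProfileCmd subtests time the full and lightweight listing paths against each other; the commands being compared are simply (as run above):
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light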

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdany-port1380927506/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722450855782750932" to /tmp/TestFunctionalparallelMountCmdany-port1380927506/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722450855782750932" to /tmp/TestFunctionalparallelMountCmdany-port1380927506/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722450855782750932" to /tmp/TestFunctionalparallelMountCmdany-port1380927506/001/test-1722450855782750932
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (206.575185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 18:34 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 18:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 18:34 test-1722450855782750932
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh cat /mount-9p/test-1722450855782750932
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-780909 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [331911a9-d700-48e6-9b89-032cd6cb7ca9] Pending
helpers_test.go:344: "busybox-mount" [331911a9-d700-48e6-9b89-032cd6cb7ca9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [331911a9-d700-48e6-9b89-032cd6cb7ca9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [331911a9-d700-48e6-9b89-032cd6cb7ca9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003573563s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-780909 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdany-port1380927506/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.42s)
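
Note: the any-port flow above is, in outline, a host-directory 9p mount plus verification from inside the guest. A rough manual equivalent, using the same commands as the test (<host-dir> stands in for the per-run temp directory, and the mount command is backgrounded here because the test runs it as a daemon):
	out/minikube-linux-amd64 mount -p functional-780909 <host-dir>:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-780909 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-780909 ssh "sudo umount -f /mount-9p"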

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdspecific-port3323797113/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (246.83921ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdspecific-port3323797113/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "sudo umount -f /mount-9p": exit status 1 (250.193536ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-780909 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdspecific-port3323797113/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T" /mount1: exit status 1 (321.217702ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-780909 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-780909 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-780909 /tmp/TestFunctionalparallelMountCmdVerifyCleanup48876769/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)
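
Note: the initial findmnt failure on each mount point appears to be the test polling before the background mount daemon is ready, not a real fault. The cleanup path this subtest verifies is the single kill switch run above, which, judging by the three /mount1-3 daemons being reported dead afterwards, stops every mount process for the profile:
	out/minikube-linux-amd64 mount -p functional-780909 --kill=true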

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-780909
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-780909
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-780909
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (216.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-326651 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 18:35:32.741688  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 18:36:00.429092  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-326651 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m35.41060367s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (216.11s)
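
Note: StartCluster brings up the multi-control-plane topology the rest of this suite exercises (the --ha flag requests the extra control-plane nodes). Stripped of test plumbing, the start and health check are the commands run above:
	out/minikube-linux-amd64 start -p ha-326651 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr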

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-326651 -- rollout status deployment/busybox: (4.178015429s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-cs6t8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-lgg6t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-mknlp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-cs6t8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-lgg6t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-mknlp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-cs6t8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-lgg6t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-mknlp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.39s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-cs6t8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-cs6t8 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-lgg6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-lgg6t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-mknlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-326651 -- exec busybox-fc5497c4f-mknlp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (60.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-326651 -v=7 --alsologtostderr
E0731 18:38:48.017764  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.023084  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.033404  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.053708  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.094033  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.174346  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.334813  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:48.655378  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:49.296167  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:50.576499  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:53.137429  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:38:58.258438  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:39:08.499282  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-326651 -v=7 --alsologtostderr: (59.377302405s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.21s)
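
Note: AddWorkerNode grows the cluster with a plain worker; the later AddSecondaryNode test uses the same command with --control-plane to add another control-plane member. Stripped down, the commands run in this suite are:
	out/minikube-linux-amd64 node add -p ha-326651 -v=7 --alsologtostderr
	out/minikube-linux-amd64 node add -p ha-326651 --control-plane -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr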

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-326651 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp testdata/cp-test.txt ha-326651:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651:/home/docker/cp-test.txt ha-326651-m02:/home/docker/cp-test_ha-326651_ha-326651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test_ha-326651_ha-326651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651:/home/docker/cp-test.txt ha-326651-m03:/home/docker/cp-test_ha-326651_ha-326651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test_ha-326651_ha-326651-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651:/home/docker/cp-test.txt ha-326651-m04:/home/docker/cp-test_ha-326651_ha-326651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test_ha-326651_ha-326651-m04.txt"
E0731 18:39:28.979793  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp testdata/cp-test.txt ha-326651-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m02:/home/docker/cp-test.txt ha-326651:/home/docker/cp-test_ha-326651-m02_ha-326651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test_ha-326651-m02_ha-326651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m02:/home/docker/cp-test.txt ha-326651-m03:/home/docker/cp-test_ha-326651-m02_ha-326651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test_ha-326651-m02_ha-326651-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m02:/home/docker/cp-test.txt ha-326651-m04:/home/docker/cp-test_ha-326651-m02_ha-326651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test_ha-326651-m02_ha-326651-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp testdata/cp-test.txt ha-326651-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt ha-326651:/home/docker/cp-test_ha-326651-m03_ha-326651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test_ha-326651-m03_ha-326651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt ha-326651-m02:/home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test_ha-326651-m03_ha-326651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m03:/home/docker/cp-test.txt ha-326651-m04:/home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test_ha-326651-m03_ha-326651-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp testdata/cp-test.txt ha-326651-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1307423699/001/cp-test_ha-326651-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt ha-326651:/home/docker/cp-test_ha-326651-m04_ha-326651.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651 "sudo cat /home/docker/cp-test_ha-326651-m04_ha-326651.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt ha-326651-m02:/home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m02 "sudo cat /home/docker/cp-test_ha-326651-m04_ha-326651-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m04:/home/docker/cp-test.txt ha-326651-m03:/home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test_ha-326651-m04_ha-326651-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.04s)
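
Note: CopyFile is a matrix of host-to-node, node-to-host and node-to-node copies, each verified with ssh + cat. One leg of the matrix, exactly as run above:
	out/minikube-linux-amd64 -p ha-326651 cp testdata/cp-test.txt ha-326651-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-326651 cp ha-326651-m02:/home/docker/cp-test.txt ha-326651-m03:/home/docker/cp-test_ha-326651-m02_ha-326651-m03.txt
	out/minikube-linux-amd64 -p ha-326651 ssh -n ha-326651-m03 "sudo cat /home/docker/cp-test_ha-326651-m02_ha-326651-m03.txt"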

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.50080015s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-326651 node delete m03 -v=7 --alsologtostderr: (16.508200812s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.25s)
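
Note: DeleteSecondaryNode removes one control-plane member and then re-checks cluster health from both minikube's and Kubernetes' point of view, using the same commands as above:
	out/minikube-linux-amd64 -p ha-326651 node delete m03 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
	kubectl get nodes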

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (280.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-326651 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 18:53:48.017608  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:55:11.064923  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
E0731 18:55:32.742286  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-326651 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m39.454155354s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (280.20s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (79.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-326651 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-326651 --control-plane -v=7 --alsologtostderr: (1m18.452909478s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-326651 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.30s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (97.37s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-835884 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0731 18:58:48.017864  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-835884 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.368291391s)
--- PASS: TestJSONOutput/start/Command (97.37s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-835884 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-835884 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-835884 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-835884 --output=json --user=testUser: (6.691335036s)
--- PASS: TestJSONOutput/stop/Command (6.69s)
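
Note: the JSONOutput Command subtests run the ordinary lifecycle verbs with machine-readable output; each emits one JSON event per line in the CloudEvents-style format visible in the TestErrorJSONOutput stdout below. The commands, as run in this suite:
	out/minikube-linux-amd64 start -p json-output-835884 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 pause -p json-output-835884 --output=json --user=testUser
	out/minikube-linux-amd64 unpause -p json-output-835884 --output=json --user=testUser
	out/minikube-linux-amd64 stop -p json-output-835884 --output=json --user=testUser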

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-877806 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-877806 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.300584ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"40cbe60e-476d-4137-9385-d0b4b1b8441f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-877806] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e79f82f0-1465-4959-8838-2ff0f0666d5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19356"}}
	{"specversion":"1.0","id":"de1e6a00-1fc6-406a-bdb7-0aa2b260ea09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f01afee-b43b-44a4-80d0-49ec2648bbd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig"}}
	{"specversion":"1.0","id":"96781208-4ba8-41ce-ba4b-53ba840d11c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube"}}
	{"specversion":"1.0","id":"1a3a6dcb-746b-47f1-b144-291545817856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"89d69864-d51c-4741-ac21-32aa83d86c52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"742f2bbe-1e10-4d2c-b0f4-789ed64f74a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-877806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-877806
--- PASS: TestErrorJSONOutput (0.20s)

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (91.36s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-270407 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-270407 --driver=kvm2  --container-runtime=crio: (44.435979849s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-272938 --driver=kvm2  --container-runtime=crio
E0731 19:00:32.741686  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-272938 --driver=kvm2  --container-runtime=crio: (43.990175364s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-270407
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-272938
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-272938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-272938
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-272938: (1.015484739s)
helpers_test.go:175: Cleaning up "first-270407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-270407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-270407: (1.003251872s)
--- PASS: TestMinikubeProfile (91.36s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-619658 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-619658 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.178945494s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.18s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-619658 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-619658 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
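
Note: MountStart boots a VM with --no-kubernetes and a host mount configured at start time rather than via a separate mount daemon; the flags and the in-guest check are exactly the commands run above (mount-start-1-619658 is this run's profile name):
	out/minikube-linux-amd64 start -p mount-start-1-619658 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-619658 ssh -- mount | grep 9p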

                                                
                                    
TestMountStart/serial/StartWithMountSecond (32.76s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638188 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638188 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.755347443s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.76s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-619658 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-638188
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-638188: (1.270921924s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.74s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-638188
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-638188: (20.744249418s)
--- PASS: TestMountStart/serial/RestartStopped (21.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-638188 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (127.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-741077 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 19:03:35.790817  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
E0731 19:03:48.017478  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-741077 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m7.289260125s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.69s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-741077 -- rollout status deployment/busybox: (3.990792114s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-99dqx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-z2dlb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-99dqx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-z2dlb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-99dqx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-z2dlb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.51s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-99dqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-99dqx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-z2dlb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-741077 -- exec busybox-fc5497c4f-z2dlb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (52.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-741077 -v 3 --alsologtostderr
E0731 19:05:32.742117  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-741077 -v 3 --alsologtostderr: (51.625430014s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.19s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-741077 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp testdata/cp-test.txt multinode-741077:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077:/home/docker/cp-test.txt multinode-741077-m02:/home/docker/cp-test_multinode-741077_multinode-741077-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test_multinode-741077_multinode-741077-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077:/home/docker/cp-test.txt multinode-741077-m03:/home/docker/cp-test_multinode-741077_multinode-741077-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test_multinode-741077_multinode-741077-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp testdata/cp-test.txt multinode-741077-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt multinode-741077:/home/docker/cp-test_multinode-741077-m02_multinode-741077.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test_multinode-741077-m02_multinode-741077.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m02:/home/docker/cp-test.txt multinode-741077-m03:/home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test_multinode-741077-m02_multinode-741077-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp testdata/cp-test.txt multinode-741077-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile510041860/001/cp-test_multinode-741077-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt multinode-741077:/home/docker/cp-test_multinode-741077-m03_multinode-741077.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077 "sudo cat /home/docker/cp-test_multinode-741077-m03_multinode-741077.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 cp multinode-741077-m03:/home/docker/cp-test.txt multinode-741077-m02:/home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 ssh -n multinode-741077-m02 "sudo cat /home/docker/cp-test_multinode-741077-m03_multinode-741077-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.19s)

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-741077 node stop m03: (1.481390351s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-741077 status: exit status 7 (425.944504ms)

                                                
                                                
-- stdout --
	multinode-741077
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-741077-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-741077-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr: exit status 7 (430.836238ms)

                                                
                                                
-- stdout --
	multinode-741077
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-741077-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-741077-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:06:01.256557  430985 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:06:01.257064  430985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:06:01.257080  430985 out.go:304] Setting ErrFile to fd 2...
	I0731 19:06:01.257087  430985 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:06:01.257325  430985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:06:01.257530  430985 out.go:298] Setting JSON to false
	I0731 19:06:01.257561  430985 mustload.go:65] Loading cluster: multinode-741077
	I0731 19:06:01.257661  430985 notify.go:220] Checking for updates...
	I0731 19:06:01.257914  430985 config.go:182] Loaded profile config "multinode-741077": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:06:01.257928  430985 status.go:255] checking status of multinode-741077 ...
	I0731 19:06:01.258302  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.258371  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.274265  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
	I0731 19:06:01.274790  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.275356  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.275376  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.275730  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.275953  430985 main.go:141] libmachine: (multinode-741077) Calling .GetState
	I0731 19:06:01.277476  430985 status.go:330] multinode-741077 host status = "Running" (err=<nil>)
	I0731 19:06:01.277497  430985 host.go:66] Checking if "multinode-741077" exists ...
	I0731 19:06:01.277914  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.277960  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.294547  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0731 19:06:01.294990  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.295480  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.295507  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.295794  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.295991  430985 main.go:141] libmachine: (multinode-741077) Calling .GetIP
	I0731 19:06:01.299085  430985 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:06:01.299533  430985 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:06:01.299557  430985 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:06:01.299651  430985 host.go:66] Checking if "multinode-741077" exists ...
	I0731 19:06:01.299970  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.300015  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.316020  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33911
	I0731 19:06:01.316528  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.317118  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.317144  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.317472  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.317669  430985 main.go:141] libmachine: (multinode-741077) Calling .DriverName
	I0731 19:06:01.317910  430985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:06:01.317936  430985 main.go:141] libmachine: (multinode-741077) Calling .GetSSHHostname
	I0731 19:06:01.321043  430985 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:06:01.321556  430985 main.go:141] libmachine: (multinode-741077) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:e7:03", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:03:00 +0000 UTC Type:0 Mac:52:54:00:e7:e7:03 Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:multinode-741077 Clientid:01:52:54:00:e7:e7:03}
	I0731 19:06:01.321593  430985 main.go:141] libmachine: (multinode-741077) DBG | domain multinode-741077 has defined IP address 192.168.39.55 and MAC address 52:54:00:e7:e7:03 in network mk-multinode-741077
	I0731 19:06:01.321730  430985 main.go:141] libmachine: (multinode-741077) Calling .GetSSHPort
	I0731 19:06:01.321917  430985 main.go:141] libmachine: (multinode-741077) Calling .GetSSHKeyPath
	I0731 19:06:01.322068  430985 main.go:141] libmachine: (multinode-741077) Calling .GetSSHUsername
	I0731 19:06:01.322212  430985 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077/id_rsa Username:docker}
	I0731 19:06:01.400508  430985 ssh_runner.go:195] Run: systemctl --version
	I0731 19:06:01.406901  430985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:06:01.422877  430985 kubeconfig.go:125] found "multinode-741077" server: "https://192.168.39.55:8443"
	I0731 19:06:01.422912  430985 api_server.go:166] Checking apiserver status ...
	I0731 19:06:01.422967  430985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 19:06:01.438919  430985 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0731 19:06:01.451697  430985 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 19:06:01.451749  430985 ssh_runner.go:195] Run: ls
	I0731 19:06:01.456264  430985 api_server.go:253] Checking apiserver healthz at https://192.168.39.55:8443/healthz ...
	I0731 19:06:01.460341  430985 api_server.go:279] https://192.168.39.55:8443/healthz returned 200:
	ok
	I0731 19:06:01.460370  430985 status.go:422] multinode-741077 apiserver status = Running (err=<nil>)
	I0731 19:06:01.460400  430985 status.go:257] multinode-741077 status: &{Name:multinode-741077 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:06:01.460435  430985 status.go:255] checking status of multinode-741077-m02 ...
	I0731 19:06:01.460832  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.460887  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.477805  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34209
	I0731 19:06:01.478227  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.478681  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.478707  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.479023  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.479239  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetState
	I0731 19:06:01.480828  430985 status.go:330] multinode-741077-m02 host status = "Running" (err=<nil>)
	I0731 19:06:01.480847  430985 host.go:66] Checking if "multinode-741077-m02" exists ...
	I0731 19:06:01.481135  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.481170  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.498034  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35763
	I0731 19:06:01.498454  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.499063  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.499097  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.499420  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.499601  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetIP
	I0731 19:06:01.502277  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | domain multinode-741077-m02 has defined MAC address 52:54:00:cd:3f:3e in network mk-multinode-741077
	I0731 19:06:01.502667  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:3f:3e", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:04:14 +0000 UTC Type:0 Mac:52:54:00:cd:3f:3e Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:multinode-741077-m02 Clientid:01:52:54:00:cd:3f:3e}
	I0731 19:06:01.502698  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | domain multinode-741077-m02 has defined IP address 192.168.39.72 and MAC address 52:54:00:cd:3f:3e in network mk-multinode-741077
	I0731 19:06:01.502787  430985 host.go:66] Checking if "multinode-741077-m02" exists ...
	I0731 19:06:01.503114  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.503149  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.518448  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0731 19:06:01.518846  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.519288  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.519310  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.519598  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.519764  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .DriverName
	I0731 19:06:01.519950  430985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 19:06:01.519975  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetSSHHostname
	I0731 19:06:01.522579  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | domain multinode-741077-m02 has defined MAC address 52:54:00:cd:3f:3e in network mk-multinode-741077
	I0731 19:06:01.523022  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:3f:3e", ip: ""} in network mk-multinode-741077: {Iface:virbr1 ExpiryTime:2024-07-31 20:04:14 +0000 UTC Type:0 Mac:52:54:00:cd:3f:3e Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:multinode-741077-m02 Clientid:01:52:54:00:cd:3f:3e}
	I0731 19:06:01.523065  430985 main.go:141] libmachine: (multinode-741077-m02) DBG | domain multinode-741077-m02 has defined IP address 192.168.39.72 and MAC address 52:54:00:cd:3f:3e in network mk-multinode-741077
	I0731 19:06:01.523217  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetSSHPort
	I0731 19:06:01.523381  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetSSHKeyPath
	I0731 19:06:01.523557  430985 main.go:141] libmachine: (multinode-741077-m02) Calling .GetSSHUsername
	I0731 19:06:01.523689  430985 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19356-395032/.minikube/machines/multinode-741077-m02/id_rsa Username:docker}
	I0731 19:06:01.603984  430985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 19:06:01.622989  430985 status.go:257] multinode-741077-m02 status: &{Name:multinode-741077-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 19:06:01.623024  430985 status.go:255] checking status of multinode-741077-m03 ...
	I0731 19:06:01.623330  430985 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 19:06:01.623367  430985 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 19:06:01.638866  430985 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I0731 19:06:01.639338  430985 main.go:141] libmachine: () Calling .GetVersion
	I0731 19:06:01.639859  430985 main.go:141] libmachine: Using API Version  1
	I0731 19:06:01.639884  430985 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 19:06:01.640150  430985 main.go:141] libmachine: () Calling .GetMachineName
	I0731 19:06:01.640400  430985 main.go:141] libmachine: (multinode-741077-m03) Calling .GetState
	I0731 19:06:01.641986  430985 status.go:330] multinode-741077-m03 host status = "Stopped" (err=<nil>)
	I0731 19:06:01.642003  430985 status.go:343] host is not running, skipping remaining checks
	I0731 19:06:01.642010  430985 status.go:257] multinode-741077-m03 status: &{Name:multinode-741077-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-741077 node start m03 -v=7 --alsologtostderr: (38.680216578s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.30s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-741077 node delete m03: (1.94356167s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.49s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (182.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-741077 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 19:15:32.746250  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-741077 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.790300375s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-741077 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.33s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-741077
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-741077-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-741077-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.471679ms)

                                                
                                                
-- stdout --
	* [multinode-741077-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-741077-m02' is duplicated with machine name 'multinode-741077-m02' in profile 'multinode-741077'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-741077-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-741077-m03 --driver=kvm2  --container-runtime=crio: (44.078245941s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-741077
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-741077: exit status 80 (219.037295ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-741077 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-741077-m03 already exists in multinode-741077-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-741077-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.22s)

                                                
                                    
TestScheduledStopUnix (115.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-468134 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-468134 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.826208276s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-468134 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-468134 -n scheduled-stop-468134
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-468134 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-468134 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-468134 -n scheduled-stop-468134
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-468134
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-468134 --schedule 15s
E0731 19:23:48.017637  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-468134
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-468134: exit status 7 (70.97014ms)

                                                
                                                
-- stdout --
	scheduled-stop-468134
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-468134 -n scheduled-stop-468134
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-468134 -n scheduled-stop-468134: exit status 7 (62.580606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-468134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-468134
--- PASS: TestScheduledStopUnix (115.45s)

                                                
                                    
TestRunningBinaryUpgrade (215.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2625258772 start -p running-upgrade-043979 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0731 19:25:32.742050  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/addons-469211/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2625258772 start -p running-upgrade-043979 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m48.664036123s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-043979 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-043979 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.984426292s)
helpers_test.go:175: Cleaning up "running-upgrade-043979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-043979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-043979: (1.228224071s)
--- PASS: TestRunningBinaryUpgrade (215.54s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (84.016293ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-978325] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (96.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-978325 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-978325 --driver=kvm2  --container-runtime=crio: (1m36.561160083s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-978325 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.90s)

                                                
                                    
TestNetworkPlugins/group/false (3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-170831 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-170831 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.283935ms)

                                                
                                                
-- stdout --
	* [false-170831] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19356
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 19:24:30.750768  439371 out.go:291] Setting OutFile to fd 1 ...
	I0731 19:24:30.750868  439371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:24:30.750872  439371 out.go:304] Setting ErrFile to fd 2...
	I0731 19:24:30.750877  439371 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 19:24:30.751038  439371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19356-395032/.minikube/bin
	I0731 19:24:30.751685  439371 out.go:298] Setting JSON to false
	I0731 19:24:30.752770  439371 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":11214,"bootTime":1722442657,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 19:24:30.752830  439371 start.go:139] virtualization: kvm guest
	I0731 19:24:30.755011  439371 out.go:177] * [false-170831] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 19:24:30.756362  439371 out.go:177]   - MINIKUBE_LOCATION=19356
	I0731 19:24:30.756366  439371 notify.go:220] Checking for updates...
	I0731 19:24:30.758648  439371 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 19:24:30.759883  439371 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19356-395032/kubeconfig
	I0731 19:24:30.761102  439371 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19356-395032/.minikube
	I0731 19:24:30.762357  439371 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 19:24:30.763585  439371 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 19:24:30.765254  439371 config.go:182] Loaded profile config "NoKubernetes-978325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:24:30.765417  439371 config.go:182] Loaded profile config "force-systemd-env-114834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:24:30.765573  439371 config.go:182] Loaded profile config "offline-crio-954897": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 19:24:30.765680  439371 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 19:24:30.803238  439371 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 19:24:30.804552  439371 start.go:297] selected driver: kvm2
	I0731 19:24:30.804566  439371 start.go:901] validating driver "kvm2" against <nil>
	I0731 19:24:30.804578  439371 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 19:24:30.806713  439371 out.go:177] 
	W0731 19:24:30.808007  439371 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0731 19:24:30.809253  439371 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-170831 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-170831

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-170831

>>> host: crictl pods:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: crictl containers:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> k8s: describe netcat deployment:
error: context "false-170831" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-170831" does not exist

>>> k8s: netcat logs:
error: context "false-170831" does not exist

>>> k8s: describe coredns deployment:
error: context "false-170831" does not exist

>>> k8s: describe coredns pods:
error: context "false-170831" does not exist

>>> k8s: coredns logs:
error: context "false-170831" does not exist

>>> k8s: describe api server pod(s):
error: context "false-170831" does not exist

>>> k8s: api server logs:
error: context "false-170831" does not exist

>>> host: /etc/cni:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: ip a s:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: ip r s:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: iptables-save:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: iptables table nat:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> k8s: describe kube-proxy daemon set:
error: context "false-170831" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-170831" does not exist

>>> k8s: kube-proxy logs:
error: context "false-170831" does not exist

>>> host: kubelet daemon status:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: kubelet daemon config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> k8s: kubelet logs:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-170831

>>> host: docker daemon status:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: docker daemon config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /etc/docker/daemon.json:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: docker system info:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: cri-docker daemon status:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: cri-docker daemon config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: cri-dockerd version:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: containerd daemon status:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: containerd daemon config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /etc/containerd/config.toml:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: containerd config dump:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: crio daemon status:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: crio daemon config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: /etc/crio:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

>>> host: crio config:
* Profile "false-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-170831"

----------------------- debugLogs end: false-170831 [took: 2.751050999s] --------------------------------
helpers_test.go:175: Cleaning up "false-170831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-170831
--- PASS: TestNetworkPlugins/group/false (3.00s)
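Every command in the debugLogs dump above fails the same way because no "false-170831" cluster exists at this point: minikube reports the profile as not found and kubectl has no matching context, which the empty kubeconfig dump confirms. A minimal sketch of the two checks worth running before reading through such a dump, using only commands already referenced in the log:
    # list the profiles this minikube binary knows about (the hint printed throughout the dump)
    out/minikube-linux-amd64 profile list
    # list kubectl contexts; an empty kubeconfig like the one dumped above lists nothing
    kubectl config get-contexts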

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.59s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (156.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2917674693 start -p stopped-upgrade-096992 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2917674693 start -p stopped-upgrade-096992 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m42.482573367s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2917674693 -p stopped-upgrade-096992 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2917674693 -p stopped-upgrade-096992 stop: (2.13484931s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-096992 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-096992 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.018713956s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (156.64s)
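The upgrade path exercised here is: the released v1.26.0 binary creates a cluster, stops it, and the binary under test then restarts the stopped cluster. A sketch of the same sequence outside the harness, reusing the commands from this run (the /tmp path is the per-run temporary copy of minikube v1.26.0 that the test downloads; substitute your own copy):
    # create the cluster with the old release (note the legacy --vm-driver flag)
    /tmp/minikube-v1.26.0.2917674693 start -p stopped-upgrade-096992 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    # stop it with the same old release
    /tmp/minikube-v1.26.0.2917674693 -p stopped-upgrade-096992 stop
    # restart the stopped cluster with the binary under test (current --driver flag)
    out/minikube-linux-amd64 start -p stopped-upgrade-096992 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio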

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (63.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.914439689s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-978325 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-978325 status -o json: exit status 2 (260.404776ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-978325","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-978325
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-978325: (1.722050091s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (63.90s)
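status -o json still prints the full machine state even though the command exits non-zero (exit status 2 in this run, reflecting the stopped Kubernetes components), so the output can be post-processed as usual. A small sketch, assuming jq is available on the host:
    # pick out the machine state versus the Kubernetes component state
    out/minikube-linux-amd64 -p NoKubernetes-978325 status -o json | jq '{Host, Kubelet, APIServer}'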

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-978325 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.947667776s)
--- PASS: TestNoKubernetes/serial/Start (26.95s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-978325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-978325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.181162ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
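The check relies on systemctl's exit status: 0 would mean the kubelet unit is active, while the non-zero status seen here (3, the conventional systemd/LSB code for an inactive unit) is exactly what a --no-kubernetes profile should report. The same probe, run directly against the VM:
    # succeeds (exit 0) only if kubelet is running inside the NoKubernetes-978325 machine
    out/minikube-linux-amd64 ssh -p NoKubernetes-978325 "sudo systemctl is-active --quiet service kubelet"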

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (29.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.727669538s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.569497905s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-978325
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-978325: (1.412112593s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-978325 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-978325 --driver=kvm2  --container-runtime=crio: (22.095353903s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.10s)

                                                
                                    
x
+
TestPause/serial/Start (77.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-693348 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-693348 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m17.934712552s)
--- PASS: TestPause/serial/Start (77.93s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-096992
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-978325 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-978325 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.686711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (61.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m1.131436334s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (112.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m52.661221448s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (112.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jw7q5" [f0d373ce-bf1c-444b-a096-011ec0829402] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jw7q5" [f0d373ce-bf1c-444b-a096-011ec0829402] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005126457s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)
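Each NetCatPod step applies testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat is Running. Outside the harness the same gate can be written with kubectl wait (a sketch; the 300s timeout is arbitrary, the harness allows up to 15m):
    kubectl --context auto-170831 replace --force -f testdata/netcat-deployment.yaml
    # block until the netcat pod reports Ready, or fail after the timeout
    kubectl --context auto-170831 wait --for=condition=Ready pod -l app=netcat -n default --timeout=300s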

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
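Taken together, the three probes above cover cluster DNS (nslookup against kubernetes.default), plain loopback (nc to localhost inside the pod), and hairpin traffic, where the pod dials the netcat name, presumably the Service fronting its own deployment, so the connection has to loop back to the pod that opened it. The same trio, runnable by hand against this profile:
    # DNS through CoreDNS
    kubectl --context auto-170831 exec deployment/netcat -- nslookup kubernetes.default
    # loopback inside the pod
    kubectl --context auto-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: dial the pod's own service name
    kubectl --context auto-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"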

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (91.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.931077915s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-4zxxc" [08b752af-1547-4505-bf75-873dba91b3ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004829756s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
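The ControllerPod step simply waits for a Running pod carrying the CNI's own label in its namespace; for kindnet that is app=kindnet in kube-system, as shown above. The equivalent one-off query:
    # list the kindnet pods this step waits on
    kubectl --context kindnet-170831 -n kube-system get pods -l app=kindnet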

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b4lm5" [97cfa3a7-2ee2-4469-b561-aa7272df6e19] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b4lm5" [97cfa3a7-2ee2-4469-b561-aa7272df6e19] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008775329s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (84.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m24.82257697s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (84.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (108.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m48.742053499s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rhkj7" [2f0d7a5e-af7f-4326-a398-397d435166f4] Running
E0731 19:33:48.017850  402313 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19356-395032/.minikube/profiles/functional-780909/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006185747s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7b59f" [09ed7285-268f-4da5-b74e-f97b1c1035e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7b59f" [09ed7285-268f-4da5-b74e-f97b1c1035e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.419257831s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (89.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m29.274438493s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-95tvz" [05818a79-6524-47ad-8465-0e8b58f3a3f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-95tvz" [05818a79-6524-47ad-8465-0e8b58f3a3f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004613704s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-170831 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m2.512840175s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-f9d8v" [d5b9f54c-abf7-4abe-a097-4c53966cf534] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-f9d8v" [d5b9f54c-abf7-4abe-a097-4c53966cf534] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00442565s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2mk4j" [9a35f613-97da-4d52-a8ea-23ff0f69c5b7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005631039s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-fbml9" [331390b3-ea99-4ccf-aee1-d8110c79df7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-fbml9" [331390b3-ea99-4ccf-aee1-d8110c79df7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004883931s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-170831 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-170831 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m76x5" [aa88a40c-2242-421b-b587-31080f270bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m76x5" [aa88a40c-2242-421b-b587-31080f270bfc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005881817s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (33.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.193099868s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.174127669s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (33.14s)
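The two timeouts above are why this step took 33s: the probe is simply re-run until it succeeds or the harness gives up. When reproducing it by hand, a small retry loop avoids treating one slow CoreDNS start as a failure (a sketch; the retry count and sleep interval are arbitrary):
    # retry the DNS probe up to 5 times, 10s apart
    for i in 1 2 3 4 5; do
      kubectl --context bridge-170831 exec deployment/netcat -- nslookup kubernetes.default && break
      sleep 10
    done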

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-170831 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/278)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
263 TestNetworkPlugins/group/kubenet 2.89
272 TestNetworkPlugins/group/cilium 3.26
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.89s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-170831 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-170831" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-170831

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-170831"

                                                
                                                
----------------------- debugLogs end: kubenet-170831 [took: 2.749887077s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-170831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-170831
--- SKIP: TestNetworkPlugins/group/kubenet (2.89s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-170831 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-170831" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-170831

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-170831" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-170831"

                                                
                                                
----------------------- debugLogs end: cilium-170831 [took: 3.115187168s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-170831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-170831
--- SKIP: TestNetworkPlugins/group/cilium (3.26s)

                                                
                                    